Programming Arduino with Altair Embed
Tracking System Design for a Robot Car using Altair Embed
This presentation is part of the 1st Maker Mind Meld Summit that took place from December 7 to December 13, 2019.
In this presentation, Richard Kolk will apply the model-based design methodology to develop two tracking control systems for the Makeblock mBot Robot Car using the Altair Embed application. The control systems will be designed, simulated, auto-code generated and tested on the mBot Car. The IR Remote will be used to dynamically switch between the two tracking control algorithms while the mBot Car is running.
About the speaker
Richard Kolk works for Altair Engineering, specializing in the Embed product, formerly named VisSim. His background is in model-based design of automatic control systems. At Altair, he uses the Embed product to model system dynamics, to develop control systems, and, using its automatic C code generator, to run Hardware In the Loop (HIL) performance testing on a variety of microcontrollers. Before joining Altair, he worked at McDonnell Douglas (now Boeing), the United Technologies Research Center (UTRC), Carrier Air Conditioning, and the Goodrich Wheel and Brake Division (now part of United Technologies). While at McDonnell Douglas, he became interested in model-based design and eventually began using VisSim. At Carrier, Richard’s Commercial Controls team embraced the model-based design method, resulting in significantly shorter design cycles with fewer defects in their microcontroller-based control products. Today he enjoys helping Altair Embed customers with their control projects and developing training material for Embed applications.
Presentation transcript
Peter Dalmaris: Hi, everyone, and welcome to this special Mastermind session. In this session, Rick Kolk, chief technical specialist at Altair will show us how to develop two tracking control systems for an Arduino-powered wheeled robot using the Model-based methodology and the Altair Embed application.
I'm Peter Dalmaris, an online educator and maker, author of Maker Education Revolution and founder of Tech Explorations. My mission is to help people learn electronics, programming, printed circuit board design and lots more. Most importantly, I want to help as many people as possible enjoy their technology education adventures.
In this session, I'm excited to introduce Rick Kolk. As I said, Rick is a chief technical specialist at Altair, a company that provides software and cloud solutions in the areas of product development, high-performance computing and data intelligence. Rick's focus is Embed, a development environment used to model system dynamics and develop control systems. Embed includes an automatic C code generator, supports hardware-in-the-loop performance testing, and works on a variety of microcontrollers, including the ATmegas that power the entry-level Arduino boards.
Prior to joining Altair, Rick worked at McDonnell Douglas, which is now Boeing, the United Technologies Research Center, Carrier Air Conditioning, and the Goodrich Wheel and Brake Division, which is now part of United Technologies. While he was at McDonnell Douglas, he became interested in Model-based design, which is the development paradigm on which Altair Embed is based.
In today's Mastermind Session, Rick will show us how we can use Embed and its underlying Model-based methodology to create two tracking control systems for a robot. If you've tried something like this, you'll know it's not a trivial thing to do. Rick will do it using a Model-based programming environment instead of a traditional text-based environment. Rick, thank you for joining me today. How are you?
Richard Kolk: Hi, Peter. I'm very good thank you.
Peter: Awesome.
Richard: Thanks for the introduction.
Peter: No problem at all. I'm eager to dive into your presentations, but just before we do, I'd like to ask you something about Model-based development. How does it compare to the familiar text-based application developing environments and why should we consider learning it?
Richard: Model-based development gives you a much larger view of the software you're designing. It differs from traditional programming in that it lets you design not only at a very detailed sub-function level but also at a macro, system level. The ability to reuse the self-documenting block diagram models you create allows you to go directly from a simulated test on your computer to absorbing different hardware components, in real time, into that same simulation. I'll be showing that today so you get an idea of what's happening.
Peter: Yes. Could you give us some use cases? Where is the Model-based design methodology at its best? What are some of the hallmark applications for such a development methodology?
Richard: Well, you can use it for any kind of system, but it really excels for more complicated systems: systems with many subsystems, different types of communications interfacing with them, and various modes of operation. For example, in an airplane anti-skid system, you have a number of operating modes that have to switch and respond very rapidly and, obviously, safely. Model-based design is used extensively in that and other aerospace applications, automotive applications, and commercial white goods applications. You find it pretty much everywhere.
Peter: When things get complicated, then Model-based methodology is a tool that is worth considering at least, right?
Richard: Absolutely. I think it's an essential tool in this day and age.
Peter: Great, well thanks for that Rick. The virtual floor is yours. Take it away.
Richard: Okay. All right, I'm going to switch over to about a dozen slides that will walk through this process, along with a few videos. I'll be talking about the Makeblock mBot car in this example. We'll design a couple of tracking control systems for it, simulate them, bring them right out to hardware in the loop, and run the car on a figure-eight track to see how well it performs.
The objectives, as I mentioned, are to apply the Model-based design procedure using the Altair Embed application. I will use it to design, simulate, produce automatic C code, and then download that code onto the mBot car and run it.
The Altair Embed block diagram-based application does a couple of things that are worth mentioning at this point. First bullet here: you can model and simulate, or solve, equations represented as block diagram models. Secondly, you can automatically generate C code from any of these simulation models or block diagrams. Normally, no hand coding is required; in some obscure cases, you may need it. Thirdly, you can execute these AutoCodeGen models on microcontrollers, or as we call them, targets. The target we'll be using here is the Arduino Uno microcontroller, which is the brains of this little car that you see.
We can execute that code on the microcontroller in two modes. One is called hardware-in-the-loop (HIL) mode; the second is standalone mode. When you run in HIL mode, we actually have a wire connecting the car back to the PC. It's used more or less to debug the algorithm before you let it loose in standalone mode. You have the ability to send and receive information from the car at high speeds using what's called a JTAG interface. That interface, and all the communications, is included with the Embed product.
Let me just spend a minute here; this is a busy slide and I'll try to point out the important items. I wanted to give you a comparison of Model-based design against a traditional software approach. This is an elaboration of a V diagram that you may or may not be familiar with. The shaded areas here, the pink and the blue, represent the preliminary design phase and the EVP, the Engineering Validation Phase. In the traditional design, you begin with requirements, looking up here in the upper left corner of the slide. You deal with requirements, you develop your handwritten code, and you then rely primarily on a debugger to iterate on the code to get the requirements to work.
It's seldom that you can really be certain you've met all the requirements at a system level, and also seldom that you can be sure your handwritten code operates as you expect on a target processor, which really doesn't become available until you get into this Engineering Validation Phase. The difference with the Model-based design approach is shown over here on the right part of the screen; look at the pink area where it says preliminary design phase.
In Model-based design, you not only build simulation models, you also build AutoCodeGen models, which are models that, by their very nature, reside on a target processor, possibly connected to a model running as a simulation on your PC, or with an additional hardware-in-the-loop component. You're able to absorb more and more of the hardware during preliminary design. The result is a design that not only meets requirements; you're also quite sure it's going to run on the processor within the constraints of the actuators, sensors and other system constraints that will be present.
You've done all that during the preliminary design phase. The takeaway on this slide is you test early, you fail early and you fix it early. That's what Model-based Design is about.
Peter: Rick, can I ask a quick question here?
Richard: Sure.
Peter: In the pink boxes on the right side of the preliminary design phase, at that point, the model of the application that you're building knows about the actual constraints of the hardware because that hardware is also modeled, is that right? The models of the individual hardware components capture their constraints as well?
Richard: Well, you would begin with models of the components, but the uniqueness of Model-based design is that you actually absorb those in as hardware components. For example, you would include the target micro in this: instead of running another simulated version of your controller code, you generate code and run it on that target, and that's exactly what you'd use in the product.
Peter: That's hardware in the loop, which we get thanks to the hardware-in-the-loop feature of the design process. Thank you.
Richard: What I'm going to do is try to categorize Model-based design into four steps that we can follow a little more easily than this diagram, which has a lot of arrows on it. These steps I've outlined over here on the right-hand side of the slide, the Model-based design steps. I'll discuss them as we go and let you know when we've completed each one.
The first thing you need in Model-based design, or any good design, is to know what it is you're trying to control. We need to put together some equations that govern how this little robot car behaves. We don't really have to go into detail on these, but just to give you an idea: this is a photograph of the car on the upper left, and I've attempted to model, at a very simple level, the car's equations of motion in terms of the centroid of the car, which I've represented here.
You can actually see it in this middle image on the left. What we're calculating here is the alpha angle, the angle of attack if you will, measured from horizontal, positive downward. Then we calculate the x and y centroid locations from an absolute reference; I use the upper left-hand corner of the screen as the (0, 0) reference, if you're interested.
Then below this is the IR sensor location. Once we have the centroid, we can calculate the sensor positions. You'll notice the little IR sensors on the photograph, I'm pointing to them on the car itself, and they're blown up in the lowest image on the left, red and green. We're trying to locate where those are in relation to the centroid. That's what sensor left and sensor right are: the left and right positions in the body axes of the car.
These equations are fine, we have them done, and now we'll convert them into a block diagram implementation. I'm not going to go into a lot of detail here because it won't benefit anybody at this point, but the plant models are encapsulated on the left in that circled red block. All of this is a hierarchy, so the top level of the hierarchy is this first block that says robot and IR sensor models. That block receives two inputs, two velocities, a left wheel and a right wheel, and it produces IR sensor signals, left and right. This is my model, and below it, I've broken the robot and IR sensor models into two separate models: one is the plant model, which is the three equations from the previous page, and the other is the IR sensor model.
These are implemented in the circled block diagrams below them: the alpha and centroid calculations on the left, and the x left, y left, x right and y right IR sensor locations on the right.
At this point, I have a little mathematical model of how this car should behave and what the IR sensors should read. What we'll do is take this little model and run it as a simulation model. When I say a simulation model, it means it's running entirely on your PC; there's no connection to anything external. What I did, just to exercise the model, was create a little animation. Again, you can do this with Embed quite easily, and I'm going to manually click on these run buttons that you see here. These will run and stop either of the wheels. I'm going to do that in real time, and we're going to see how the model behaves when I change the speed of each wheel.
We'll start the animation, and I'll let it play, but I'll stop it occasionally. What you're seeing now is that model. Again, this is a simulation model, and these are the two buttons, run and stop. You're going to see me exercise these buttons, and I want you to watch what the car does and also make note of what IR sensor left and right are outputting. They should output a one if they're on the black track and a zero if they're on the white background.
Here I move the car off the track by clicking these buttons; it's a little bit awkward to do, but I'm able to move the car back and forth over the track, and I can see what the IR sensors are producing. Here I stop; I've just driven the car completely off the track.
Let me close this window, and make sure I close the right window. That's fine. That wasn't too exciting an animation, nor a very good one, but what I'm going to do next is replace this rather awkward mouse-driven interface with a real-time interface. I'm going to use a joystick connected to an Arduino, and I'm going to use the signals from the joystick. These are two potentiometer signals that come out of the joystick. I'm going to use one axis, one potentiometer, to control the right wheel speed and the other to control the left wheel speed. I'll be able to do this myself, sitting here at my desk, by connecting this Arduino Uno and replacing those two buttons that feed into the plant simulation.
The actual block diagram that processes the Arduino inputs is shown down here in the lower right corner of the screen. We're receiving an analog input, and I've blown up one of these convert-to-wheel-speed blocks. Each one receives an analog input on channel zero or channel one. I've blown up the analog inputs over here so you can see what they look like when you configure a peripheral using the Altair Embed application. When we say Embed supports a microcontroller, we support its entire set of peripherals, so every peripheral is supported, and you access them through windows similar to this ADC input properties window. PWMs, general-purpose I/Os, ADCs, and communications like CAN, SPI and I²C are all supported in the same way.
They're configured through a window like this, and then you include the corresponding block in your diagram, and it transmits or receives data. Before we actually plug this in, let's make sure the joystick model works. What I've done is create a very brief video showing this model, and I'm going to just stop for a second. This is a simulation model that you're looking at, which means it's running on the PC, but I'm going to generate code from this joystick model I just showed you the contents of, and that will produce what's called an AutoCodeGen model, which is the type of model we can load onto a target and run.
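As a rough hand-written equivalent of what a convert-to-wheel-speed block might do, here is a C sketch that maps an ADC reading to a signed wheel-speed command. The 10-bit range and the centring of the joystick are my assumptions about the Uno's ADC and a typical analog joystick, not values read from the actual Embed model.

```c
/* Map a raw ADC reading from a joystick potentiometer to a signed
   wheel-speed command. Assumes the Arduino Uno's 10-bit ADC
   (0..1023) with the joystick resting near mid-scale. */
double adc_to_wheel_speed(int adc_counts, double max_speed)
{
    /* centre the 10-bit reading, then normalise to -1..+1 */
    double norm = (adc_counts - 511.5) / 511.5;
    return norm * max_speed;
}
```

Pushing the stick fully one way (counts = 1023) gives +max_speed; fully the other way (counts = 0) gives -max_speed.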
Let me continue and I'll show you the process for generating code for a model like this. You first select it, then go through a couple of windows in the Embed application, and this allows you to create code for any block that you select.
That's what we're going to do here: we're going to compile it. Underneath this target list are upwards of about 400 different microcontrollers that we support, across many different families. The Arduino is just one of them. Let's continue with this. I'm creating the code here, and I'm going to include the code now in an AutoCodeGen model. The way I communicate with it is through something called a Target Interface Block. It's this white block here, now connected to the Arduino, and the joystick, through that cable.
What's going to happen is I'll be moving the joystick on my desk here, and this model, called joysticktest.elf, will capture those values, do the scaling I provided in this red block up here, and plot the resulting signals on this time-history plot. Now, when I run this model, it's going to link it and download it onto the Arduino. Then we see the Arduino here, and you'll notice that as I move the joystick with my finger, you can see the responses of the two potentiometers being plotted in real time.
Something very important to notice here is that Embed has the ability to synchronize its simulated time with real time. We can actually synchronize any operations occurring in our simulation model with what's happening in reality. That enables us, very easily, to put hardware components into a control loop and run them. You get the idea. I'm just moving the joystick in two different directions, and it's responding in this plot, showing me what it does.
At this point, I know the joystick works. What I'll do is replace the cumbersome buttons with this joystick input and try to control the car again. Now what I'm actually doing is running a hardware component, this Arduino with the joystick, in the loop with a simulation model. They're connected with this HIL interface, as we call it. It's a wired interface, a JTAG connection. Let's run this model.
Now I've replaced the two buttons with the same model we just tested, and we're going to control this simulated plant model with the joystick. You can see it's very easy for me to control it now, to move it off the track and back on, and so on. The whole purpose of this exercise was just to make sure the plant model is good enough to design a controller on. It does respond correctly, so I think we have a decent plant model at this point. We'll proceed with the design of the controller.
For the first controller, which I call tracking controller one, the simulation model is shown on this page. Before we get to that, we have to have some requirements that we're designing to. There were two requirements I wanted to satisfy with this exercise. Number one, I wanted the car to track a figure-eight course without stopping. Secondly, I wanted the mBot car to regain tracking from an initial off-course position, meaning both IR sensors start off the track, on the white background.
The first controller I put together has the four modes of operation shown here on the left, in these four images. In the body axes of the car, the car is facing to the right and moving forward. Your left is the left side of the car, right is the right, and at the front of the car are these two IR sensors.
There are four different conditions that can occur for these sensors, and I've tried to show them in the images here; they didn't come out all that well, but you get the idea. In the first image, both sensors are on the track. In the second, the left sensor's off the track and the right one's on it. In the third, just the opposite is true: the right sensor is off and the left one's on. In the fourth, both are off the track.
These statements here simply capture those conditions. On track, just as you would define it in a text language, is defined as one, and off track is defined as zero. The IR sensor left and IR sensor right signals are the inputs to this block; you see them right up here, coming into the block.
Then the output of each of these four cases is a one or a zero: it's either true or it's false. I convert those four true-or-false conditions into a single tracking case just by multiplying each by a successively increasing integer and adding them together. The tracking case will then range from zero to four. This case block will output whichever input pin you specify through the case value.
For example, if the tracking case equals two, the output is set to the pin-two input. On the pin-two input here, these bold lines are vectors and the narrow lines are scalars. This STD just means take two scalars and make them a vector; it's nothing more than stacking the two values on top of one another. The only reason I do that is for brevity and a little visual simplicity.
Peter: When you say pin 2 for example, you refer to the pins in the case block and it's not an Arduino pin. Correct?
Richard: Correct.
Peter: What's the terminology you use to refer to the numbers here, in the case block in particular?
Richard: In any block diagram, we refer to these inputs and outputs as pin numbers. They're numbered from top to bottom. You get the idea of what's happening: you're able to select and issue specific commands to the wheel velocities.
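The tracking-case encoding Rick describes can be sketched in plain C. The Embed model does this graphically with a weighted sum feeding a case block; here the two IR readings (1 = on the black track, 0 = on white) are combined into a single integer. The specific weights and case numbering below are my illustration, not necessarily the ones in the actual model.

```c
/* Combine the two IR sensor states into one tracking case.
   Exactly one of the four mutually exclusive conditions is true,
   so the weighted sum selects it, the way the case block then
   routes to the matching wheel-speed command. */
int tracking_case(int ir_left, int ir_right)
{
    int both_on   = ( ir_left &&  ir_right);  /* case 1: go straight */
    int left_off  = (!ir_left &&  ir_right);  /* case 2: steer back left  */
    int right_off = ( ir_left && !ir_right);  /* case 3: steer back right */
    int both_off  = (!ir_left && !ir_right);  /* case 4: off the track    */

    return 1*both_on + 2*left_off + 3*right_off + 4*both_off;
}
```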
One other thing I want to mention here is that I plan to use this model for simulation testing as well as hardware-in-the-loop testing. When I do the hardware-in-the-loop testing, I'll be commanding a PWM, which accepts a value from zero to one, ranging from stop to full speed, so I don't want the normal speed to be 15. This is an interesting little switch, a built-in switch you can include in any model, and it says: if you're doing CodeGen, if that Boolean is true, set the normal speed to 0.8, or 80% of your PWM value; if you're doing a simulation, set it to the bottom one, which is 15. I have the flexibility now of just including that and never having to touch this model. I can go right to hardware in the loop with the same model.
Peter: With that capability, the model, when it's running, knows whether it's running as a simulation or whether the hardware is in the loop, so you can get it to behave differently in either case?
Richard: Right. Correct. There are two types of models. There's what we call a simulation model, which is everything you see here running on your computer, and there's an AutoCodeGen model, which is the model that runs on the hardware. What I'm going to do is try out this controller, but I need some way to disturb it, because it's not going to be very interesting to watch it just travel down a straight line. It won't do anything, and I don't like using the mouse buttons for it.
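In hand-written C, the built-in simulation/CodeGen switch Rick describes reduces to selecting between two constants. The two speed values come from the talk (0.8, i.e. 80% PWM duty, on the target; 15 in the pure simulation); wrapping the choice in a function is my own sketch of the idea, not how Embed implements it.

```c
/* Sketch of the simulation-vs-CodeGen switch: in Embed this is a
   graphical block driven by a Boolean; in plain C the same idea is
   just a branch on a build-mode flag. */
double normal_speed(int doing_codegen)
{
    return doing_codegen ? 0.8   /* on-target: PWM duty, 0..1 */
                         : 15.0; /* simulation wheel speed    */
}
```

The point is the same as in the talk: one model serves both modes, and nothing has to be edited by hand when moving from simulation to hardware in the loop.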
We'll just extend the joystick testing we did on the plant model, and use the joystick to inject a speed disturbance into the controller while it's running the robot car down the track in the simulated model. I'll be able to manually insert speed disturbances into either wheel using the joystick. I set the joystick to put out disturbance values that are twice the nominal, or normal as I call it here. That way I can be sure I can actually drive the car off the track.
This is, again, the joystick model I use, a little different from the previous one; it's a bit simpler. It takes the two analog potentiometer inputs here, and they're configured. These are slightly different blocks from what you've seen before. Embed supports not only floating-point models; you can also build fixed-point models if your processor only supports fixed point. That's what these funny things with the Q notation are; it simply specifies that this is a 16-bit number with 4 bits to the left of the radix point.
This is a whole area we don't really need to go into in detail. Just understand that there are provisions in Embed that allow you to code very efficiently for either type of processor, floating point or fixed point.
Peter: Do you need to be aware of what kind of arithmetic your MCU supports? You obviously should be. I'm just thinking, let's say we change the target from the Arduino to some other kind of microcontroller, like an ESP32. Can the code generation module adjust and use the kind of arithmetic that is available in the new target? Or should the model expect that--
Richard: The first time I ran this model, when I generated the joystick code for the AutoCodeGen model, do you remember I pointed to the CodeGen window and said there were hundreds of different targets we support? You just select one there, and it will generate code specifically for that target.
Peter: All right, so the same model will work with any of the 400 available microcontroller units.
Richard: Generally, that's true, yes.
Peter: I'm sure there are exceptions because there's such a big variation in the hardware, but for most things, it will work. If it doesn't, do you get some kind of notification, like, "This failed, you need to pay attention to this model or this component"?
Richard: Well, we provide some error checking on the model. If there's clearly an error you've created in the block diagram model itself, it will be flagged. However, more often than not, you discover the error because the model doesn't run right on the target you're trying. We have a variety of tools that allow you to interrogate that target. One is CPU utilization: you're able to instrument, again graphically, your block diagram model, which becomes the AutoCodeGen model, and investigate how much CPU time is being taken by individual functions, those blue compound blocks that run on the target.
This is very valuable; a lot of times you'll discover that it's a CPU usage problem causing the other problems. That's not to say there aren't other problems you'll encounter. Certainly, it's a complicated area, but the tools provided are fairly powerful, and they'll get you through most of this graphically without reverting to hand code.
Peter: Awesome.
Richard: Okay, so now we have the disturbance model created. We're going to attach the disturbance model I just went over to the tracking controller model that we also just went over. We're going to use the plant model we discussed previously, which we're sure is working because we already tried it with the joystick model. The configuration is what you see here, and this is important to see. The tracking controller is a feedback system: it senses, as inputs, the two IR sensor states, and from those it produces a wheel velocity command that goes back into the plant model to run the wheels.
What I've done over here on the far left is bias that feedback with this joystick, which I'm going to use to manually inject disturbances to see how well the controller responds. That's what you have to be sure is working right. Again, this is a blend of a simulation model running with an AutoCodeGen model, in synchronized time between the two. I'll set up a video. Let me just stop it for a second. The car is starting off here; I've placed it on the track, it's moving from left to right, and the controller is doing a fine job keeping it on course.
What you'll see in a second is an image of me moving this joystick disturbance and pushing the car off the track. If you look carefully, you can see the car moving off the track as I move the joystick, and you can see the controller compensating. It's doing a pretty good job so far. I get one of the IR sensors off, but when I get both of them off, it just stops. That's how I designed it, so it's not surprising; it just stopped. [laughs]
Even though we haven't met all the requirements yet, I want to proceed to the HIL testing; we can do a design iteration using Embed and the Model-based design approach later. Normally, at this step, you would go back and fix this; however, we're going to continue through HIL testing so you can see it. Before I get to HIL testing, I have to make sure that I can actually read the IR sensors and run the wheels on the car from an AutoCodeGen model.
Peter: Rick, sorry to interrupt you here, just to make sure I understand. What we've done so far was to create a model using a fully graphical, model-driven method. We've tested it in software; we've simulated it. We've connected the simulation to the real world, because you used a real joystick to input commands and see how the model behaves. We still haven't tested it on the actual robot; the robot itself is simulated. What we'll do now is automatically generate the C code, the firmware, upload it to the Arduino on the real robot, and then test the robot in real life. Is that what's happened and what's going to happen?
Richard: Exactly. You looked at these slides before I showed them.
[laughter]
Peter: Awesome. It's amazing.
Richard: That's what we're going to do. We have two intermediate steps before we get to the point where we can run the robot. First, we need to make sure we can read the IR sensors correctly, and second, we have to make sure we can control the wheels on the robot correctly.
This slide shows the AutoCodeGen model that will be created for reading the IR sensors. I've captured the IR sensors in this block called read IR sensors, and the contents of the block are shown below it.
You see it's just two digital inputs; they happen to be pins 9 and 10 on the Arduino. Down here, I've configured the pins. Again, this is an Embed window that allows you to configure digital inputs; all I've done is configure both inputs. I didn't like the fact that they were complemented relative to what I was expecting in my model. That is, they weren't putting out a one and a zero for black and white, so I had to complement them myself before creating the IR sensor left and right signals. Let's see what this looks like in a test, to understand how it's working.
I'm doing the same thing here. Let me just back up for a second. We're starting with the read IR sensor model, and I've created the AutoCodeGen model from it; you can just see it underneath this DOS window. That model is being loaded onto the target now. I have the car here, and I'm going to slide a piece of black paper underneath the IR sensors. If you look carefully down here, you'll notice, right between the ultrasonic sensors, two blue LEDs. They go on and off depending on whether the left or the right IR sensor is sensing black or white.
I'm just plotting those up here as well, so you can actually see the effect of me passing the black paper under them. They individually work and now I've just swept through them to make sure that they're working as I sweep from left to right or right to left. Okay, so now at this point, we have a decent model that's able to read these sensors.
The last thing I need to do is to make sure I can spin the wheels on this. Now, each wheel is controlled by two pins on the Arduino. One is a direction pin and the other one is a speed pin or a PWM pin. Again, Embed supports all the peripherals. We support the PWM directly with a special PWM block, as well as the digital output. The spin wheel command receives a value from zero to one, that's the PWM command.
That will transition the speed from a stop to full speed on each wheel. The contents of the spin wheel block, I've just put down here so you can see these. I confess I had to actually do this a couple of times to get it right. The directions are set correctly now, so the wheels do go in the right direction. They're always going forward. I can control their speed now by this lower block on each one.
There's a left wheel speed PWM command that goes into this block called the Duty cycle Arduino PWM pin 5, and the right goes into pin 6. Down here on the bottom, you see what the Embed window looks like to set up a PWM. I've just set the Arduino pin here; it was 6 for this one, and it would have been 5 for the other. The last thing I need to do is confirm that the spin wheel test does actually work. We'll run that same procedure.
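As a sketch of what the spin-wheel block computes, the normalized command can be mapped to a PWM duty cycle like this. This is an assumption for illustration (8-bit PWM resolution, a made-up helper name), not the Embed-generated code:

```c
#include <stdint.h>

/* Each wheel uses two Arduino pins: a direction pin, held at
   "forward" here, and a PWM speed pin (5 = left, 6 = right).
   The spin-wheel command is a normalized value in [0, 1] that is
   mapped to an 8-bit duty cycle, 0 = stop, 255 = full speed. */
static uint8_t pwm_duty_from_command(double cmd)
{
    if (cmd < 0.0) cmd = 0.0;             /* clamp out-of-range commands */
    if (cmd > 1.0) cmd = 1.0;
    return (uint8_t)(cmd * 255.0 + 0.5);  /* round to nearest duty step */
}
```

The clamp matters in practice: the tracking controller later sends commands capped at 0.8, and anything outside [0, 1] would otherwise wrap around when truncated to eight bits.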
[laughs] The target interface block is already there. I'm just loading the auto code-gen model, and that's going to be running now on the car. I'm controlling it with these sliders, and you can look at the car and see the individual wheels moving forward or back.
I'm plotting out here the normalized speed for both wheels on this plot. Again, everything's in real time, synchronizing reality to the simulation model, which is obviously a very strong feature. Okay, so now we have everything working. We're going to take our tracking control system, tracking control system one, unaltered from how we tested it on the simulation model, and I'm going to attach to it the read IR sensor model.
Over here, I'm just attaching the speed model. There really isn't much to attach there because, within the tracking controller, you recall that I was using this code-gen switch to switch the speed range from zero to 15 down to zero to 0.8 for the value that's being sent to the PWM. They're good to go as they are.
First thing I'll do is I'll run this model that you see here and I'm going to run on the desktop just so it doesn't run away from us and we can make sure it's working right. Let me show you that. At this point, what I'm doing is I have that model. This is a simulation model that's contained within this block. What I'm going to do is generate the auto code-gen model by selecting the block and going to the tools menu and saying "code-gen". You'll see me do that. We're going to compile it.
I'm going to download it onto the car and this will be a standalone model. Meaning aside from just the loading process of the software, we will not use this JTAG interface, this thing that you see in blue. I've got to turn the car on in order to load the executable model onto it, which I've done. We click download and the auto code-gen model downloads onto the car. Now I've got this car that's sitting on my desk running and I'm going to move the sheet of paper out from underneath the wheels. You got to look very carefully but you can see the wheels starting to turn. It looks like they're doing the right thing. It looks like it's safe to put it on the track.
Our next step is to carry this thing over and place it on the figure-eight track, which I have right here in this video. You will see that working. I placed it on the track, which I had set up in my bathroom, turned the car on with the controller in there, and there it goes. It's traveling around the track, not really exciting.
Peter: [laughs] It is for me.
Richard: We'll watch it for a minute. Kind of zips around there. We're going to see something shortly that's going to be disturbing about how the thing is working. You notice that it kind of continually jags off the track, but it's able to keep one eye sensor active until here, they both went off.
Peter: It stops.
Richard: It's how we designed it, and it stops. I put it back on, it keeps going. It stopped again. It's not a very good controller. We're going to redesign it now.
Peter: It seems there's iteration, right?
Richard: That's the first iteration, so now we're going--
Peter: It satisfies the two requirements that you had.
Richard: Partially, yes. It obviously doesn't crash, so that's a difference. [laughs]
Peter: We didn't do too badly.
Richard: We knew we weren't going to do well because-
Peter: It's when the two-- At the middle of the figure eight, I guess it's a special condition that you didn't consider when you were doing all this. It's a condition that appears only in this specific location on the track, so you've got to deal with it now.
Richard: If you vary the speed and you have high curvature on the track, you're going to see this condition more and more. I've been thinking about it, and if you think about it, it's just not a good design, because there is no good action for the state when both sensors go off the track; you can't do anything.
Peter: Yes, it just stops.
Richard: We have to do something.
Peter: Rick, just before you continue, I noticed that a couple of times you made a reference to the JTAG interface. I just want to double-check, since the Uno doesn't have one of those. I guess it's working via the USB interface. Is that correct? Would you confirm that?
Richard: Sorry, yes. It was a serial interface. Correct. You're right.
Peter: You can do hardware in the loop through just a regular serial USB connection?
Richard: Yes. Of course, exactly.
Peter: Thanks.
Richard: This is our iteration. What I'm going to do is take the tracking controller 1 model, and I'm just going to redefine tracking case four. I'll circle what I've done here. This first piece of logic keeps track of the last state that it tracked. Before the thing goes completely off the track, we'll know it was in state two or three. Then down here, that samples and holds it, so it's always there. When it goes off the track, it takes the last valid state, if you will, that it was tracking.
Now what I do is I say, "Okay, you're off the track now. What was the last valid state? Was it two?" If it was two, then turn in the direction to put me back on the track. If it wasn't two, it had to be three, so turn in the other direction. That's why I've reversed these speeds here. That's going to go in and replace the "do nothing" from tracking controller 1 on condition four.
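The sample-and-hold recovery logic Richard describes can be sketched as a few small C functions. The state numbering used here (1 = both sensors on black, 2 = left only, 3 = right only, 4 = both off the line) and the function names are assumptions for illustration, not taken from the Embed model:

```c
/* Classify the two complemented IR readings (1 = black) into a
   tracking state. */
static int sensor_state(int ir_left, int ir_right)
{
    if (ir_left && ir_right)  return 1;  /* centered on the line    */
    if (ir_left)              return 2;  /* drifting off one side   */
    if (ir_right)             return 3;  /* drifting off the other  */
    return 4;                            /* completely off the line */
}

/* Sample and hold: remember the last state that was 2 or 3, so a
   valid turn direction survives after both sensors go white. */
static int hold_last_turn(int prev_hold, int state)
{
    return (state == 2 || state == 3) ? state : prev_hold;
}

/* In state 4, instead of controller 1's "do nothing", steer back
   toward the line using the held state (the actual turn reverses
   the wheel speeds relative to the normal state-2/3 turn). */
static int recovery_action(int held_state, int ir_left, int ir_right)
{
    int s = sensor_state(ir_left, ir_right);
    if (s != 4)
        return s;                        /* still tracking normally */
    return (held_state == 2) ? 2 : 3;    /* recover toward the line */
}
```

The key design point is that state 4 is no longer a dead end: it always resolves to a turn, chosen by which side the car last drifted off.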
Let's see how that works. Okay, first of all, I ran this as a simulation. Again, I used the joystick on it, and I tried to move it off the track to see what happened. There you see I just got both sensors off, and it came back by itself. This is looking encouraging. It looks like this controller is going to be able to keep the thing on the track pretty well, we think at least, based on the simulation results.
We'll move right now to the car. We don't really need to do any desktop testing because we know the thing runs, we're not changing anything other than the algorithm itself that's running in the tracking controller. I just dropped that in here replacing tracking controller 1 with this and bring this up. [crosstalk]
Peter: Once you have the hardware modeled, you don't have to test it every time. Now you can just focus on the controlling algorithm, therefore we increase the iteration speed there.
Richard: Yes, you can do many design iterations. You can see how fast these are being done. Just the ability to generate the auto code-gen model so quickly and put it onto the car in standalone mode, you can do in a matter of two or three minutes once you've decided what the function is. You can make many design iterations, very finely tuning this thing to have it work properly for you. What I did here was I went and loaded it on the car, and the car is running now. It's under controller 1, and it ran off the track. What I did was I programmed the IR remote so it had three modes of control.
One was tracking controller 1, one was tracking controller 2, and the third mode was stop. I didn't include how I did that as part of this presentation because it would have taken too long to go through the IR remote, but it's very straightforward to do and pretty essential when you want to try these. There's no other way to-- You don't have a wire on this thing, so you need to control it some way. Using the IR remote was a very convenient way. What I'm doing right here is tracking controller 1. It had been running, but went off the track; those sensors are on white.
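The three-mode switching could look like the following sketch. Since the IR remote decoding wasn't shown in the talk, the button codes and names here are hypothetical:

```c
/* Three run modes selectable from the IR remote, as described in
   the talk. Button values below are illustrative, not the actual
   mBot remote codes. */
enum run_mode { MODE_STOP, MODE_TRACK_1, MODE_TRACK_2 };

/* Map a decoded remote button to a mode; unknown buttons (or
   stray IR codes) leave the current mode unchanged. */
static enum run_mode mode_from_button(int button, enum run_mode current)
{
    switch (button) {
    case 1:  return MODE_TRACK_1;  /* select tracking controller 1 */
    case 2:  return MODE_TRACK_2;  /* select tracking controller 2 */
    case 0:  return MODE_STOP;     /* stop the car                 */
    default: return current;       /* ignore anything else         */
    }
}
```

Keeping the current mode on unrecognized codes is the important detail: IR receivers routinely pick up noise, and the car shouldn't change behavior because of it.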
Now I switch the controller to two. What happens is, the car is able to get back on the track from an at-rest position that's off the track, and we'll see that it continues to track. You'll get tired of watching this. It won't come off the track at this point. We have a successful design. It looks like we have a good tracking controller that satisfies both requirements. We're still in the preliminary design phase, and we know the thing runs on the micro that we intend to use on the product. We have very low risk now going into the EVP stage.
Just to summarize what I've shown you, a couple of takeaway points that are important for you. Aside from being a lot of fun, the model-based design procedure is efficient, it's iterative, and it's repeatable. It's a very good approach for designing any semi-complex to very complex embedded system, generating software for it, and testing it. The hardware in the loop iterations are conducted earlier, during the preliminary design phase instead of the engineering validation phase. That results in a couple of very key benefits, real benefits.
We're more certain that requirements are met earlier, instead of finding problems in the EVP stage. When they occur in the EVP stage, you've got to go all the way back to the preliminary design phase, and it might entail a hardware modification because you're using actual manufactured hardware at that point. Turn-backs are very expensive when you're doing them from the EVP phase.
Overall then, because you have fewer turn-backs, you have a shorter development time than a traditional non-MBD approach. You can predict when you're going to be able to enter into service much more accurately. When you're a large company doing releases, people want to know: when is it going to be ready? When are you going to release it? You have a believable date that you can publish at that point.
Peter: You can be confident.
Richard: Again, the takeaway here is test early, fail early and fix early. That's what model-based design is. Just in closing here I'll just mention the Altair Embed application is the tool I've used throughout all of this. It's available through this link that I've provided on the bottom. We can get some other contact information after this video.
Peter: Sure, great. Thank you, Rick. It was amazing. I've done something like this in a traditional way with Arduino C program. It was quite large. We tried to get it to stay on the line. I have used an mBot and I have used the Scratch language to code, something similar but the conditions that the robot could deal with to stay on the line were very limited. I could continue to add more blocks, more Scratch blocks or more C-code, but I can see how they can get out of hand quite quickly because then I'll just have to wait for the next condition to occur in real life, then go back to the application and deal with it, add to the rest as well. It can get fairly complicated.
One thing that I wanted to ask you in conclusion here is about the learning curve for somebody who is interested in learning how to work with model-based methodologies, and with Altair Embed in particular. I saw, in the beginning, some scary mathematics that I haven't seen since I was in engineering school. That is one thing. Then there are just a lot of blocks there.
Just like in the C language you need to learn the various functions and primitives of the language and the control structures and all that, I guess there is something similar that you need to become familiar with and capable of working with in model-based languages like the one that Embed supports. Could you give us some guidance, I guess, about the learning curve, and even resources? If people want to learn how to use the model-based methodology to build something like what you've built with Altair Embed, what can they do? Where can they go?
Richard: The learning curve is shallow. I've actually used this for many years. I was a college instructor for about 20 years. At the time, it was a tool called VisSim, very similar to the Embed application. As a matter of fact, it's the same application, just improved and expanded in its capabilities. The beauty of this is it's fairly self-documenting, it's pretty intuitive, and it's as complex as you want to make it.
You'll find that you can create very simple algorithms using very basic blocks. There aren't a lot of blocks, there aren't data types you need to remember, nothing like that. Essentially, it's putting in a function that's described by a block in a category and connecting a wire to it to send the data to it and from it.
What you'll find is, it is fairly easy to do. Altair does offer training courses. We do have a two-day training course that we offer that is a very good introduction to embedded programming using Embed. It's based on the Arduino, and it goes through a whole series of small, interesting problems that you can put together in hardware and use the Embed application to design controls or monitoring applications for them.
Peter: Great. I guess the fact that Embed supports a few hundred microcontrollers, does that mean that you can take the knowledge from that training or from the time that I can invest in learning Embed with my Arduino and then deploy the same models to other microcontrollers like Texas Instruments, for example, like those that are used in industrial applications and so on?
Richard: Sure, yes. Matter of fact, we support most of the TI families of chips. There are hundreds of individual chips that are supported in those families. We also support ST Micro, Raspberry Pi, and others I can't remember off the top of my head. There's a very large collection of targets that we support with this.
Peter: It's not something that you can do otherwise. I can't really take my Arduino programming skills and use them directly with a Texas Instrument microcontroller. I would have to relearn quite a lot of the language. They're both uC, but the specifics are quite different. The architecture of the microcontroller itself is enough to require changes in all of my programs, which if I understand right, that's not the case with Embed. The same model essentially can be moved from one target to the other.
Richard: You're correct, yes. You don't need any sort of a deep knowledge of the intricacies of running the IDE for a specific micro family. Embed, it doesn't really care what micro you're using. You need to specify it, but other than that the code is just used across pretty much any micro that we support and unchanged.
Peter: Generated on the fly. Just click on the button and generate a code for the model.
Richard: Yes, you can see how fast the auto code-gen models are created. It's quite fast. Even larger models that I've been involved with don't take longer than about 30 seconds to build. A block count in the vicinity of maybe 4,000 would be considered a very large model, and that produces probably 15,000 lines of code from the code generator, just to give you an idea. Certainly, it's a very robust tool, and it's been stress-tested significantly with the larger systems in mind.
Peter: I wonder with the code generator and large complicated applications like military drones and things like that or industrial applications that can get quite large, can you trust that the code generator would actually produce good optimized code or do you then need to go and tweak it to improve things?
Richard: Normally, you do not have to tweak it to improve things. We spend a lot of time on code footprint and code efficiency, and we're constantly measuring how well, how fast, and how little memory is taken. There was a benchmark done on our code-gen by TI for some PWM code. I forget what it was, maybe it was an ADC attached to a PWM, a fairly complicated piece of code. What TI found was that we were within 4% on both footprint and speed of hand-coded C with a- [crosstalk]
Peter: You can concentrate on the model as the program and you don't have to worry about what comes out the other end. You're supposed, obviously, to get the model right.
Richard: That's correct. You've got to really focus on the controller design, if that's what you're doing here.
Peter: Just one more thing now that you've got a link here on this slide. Is that the place where people can go if they want to get the documentation and download Embed? Would that be the first place for our audience to look?
Richard: I think that's the first place but like I said, I'd like to just follow up with you after we talked to Jim Ryan. He may have other links that he'd prefer that you use for that.
Peter: Sure, we'll include them into the presentation page as well. Okay, thank you, Rick. That was a really informative presentation. I really enjoyed it. I've played around with Embed in the past and I can say that you've taken it further than I thought it was possible. Especially with the way that you can integrate simulation with real control. Like how you had the joystick connected into the simulator, you were able to animate the model of the input of the robot and then control it and then see how virtually it behaves. I think that that was really amazing to see how they can save a lot of time when you are working to create, in this case, control algorithms.
As I said, I've created control algorithms on an mBot myself and other little robots that I've built, but the process was always, "Go to the Arduino IDE, write a little bit of code and upload it to the mBot, put it on the floor. See what it does, take notes. Go back to the Arduino IDE, make changes. Do the whole thing again."
Anyway, in your case, you can just use the animated version of the robot with some controls that can be connected to a joystick, or you can virtualize them as well. You can have a fully virtual environment, and I can see how that can speed up the development. Thank you for that, I really enjoyed it. How can people get in touch with you or Altair if they want to ask questions and communicate with you?
Richard: Probably the easiest and best way would be to use the online discussion forum link that you're able to show there.
Peter: Let me just show you what it looks like. Altair has got a forum dedicated to Altair Embed, which you can see up here. I'm going to have it in the notes as well. It's a discussion place. People talk about their projects, issues with model-based programming, and the development environment as well, Altair Embed. I think you're in here sometimes as well, Rick, aren't you?
Richard: I think so.
Peter: You look there and you answer questions.
Richard: I check it often, and you'll get a response from me or somebody on the team.
Peter: Yes, there’s Jim, there’s Isi, there's quite a few Altair people there that help users and programmers with their projects. Great, thank you. Awesome, again, thank you for your presentation, Rick. I wish you all a good day.
Richard: Very good. Thanks, Peter. Take care.
Altair Embed series
Altair Embed and Downloads
Altair provides additional educational resources and a free version of Altair Embed on its website.
In addition to the resources available on the Altair Embed page, the SolidThinking Learning Center provides additional video tutorials to help you get started.