
Behaviour-Based Wall Follower in NXT-G

I’ve made a 45-minute tutorial video showing how to program a MINDSTORMS NXT robot in a basic behaviour-based way, a great way to start with a simple robot program and work up to a complex-yet-still-manageable one. The video explains the concepts behind behaviour-based programming, shows step by step how to build a wall-following program in NXT-G, demonstrates what the robot does, and walks through debugging the code until the robot exhibits the desired behaviour.

Here are a couple of diagrams from the video:

Download

Here are the options for downloading the video. Depending on your web browser, you may be able to click an option and stream it, or you may prefer to right-click the file and save it.

Format       Large (820×614)   Medium (512×384)
QuickTime    [312 MB]          [123 MB]
Ogg Theora   [325 MB]          [208 MB]

(If you aren’t sure which video format to choose, you probably want to go with the QuickTime video.)

The NXT-G source code demonstrated in the video can be downloaded here. Note that you’ll need the NXT-G 2.0 software to use them.

Share and Remix

Creative Commons License
Behaviour-Based Wall Follower in NXT-G by Clinton Blackmore is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 2.5 Canada License.

Leave Some Feedback

I’d love to know what you think. I believe the video explains some powerful principles that will help breathe life into robots! Feel free to add a comment or send me an email.

Article History

2010-04-06 — Reworked a lot of the text, added pictures, added the CC license to the videos, and encoded and uploaded videos in Ogg Theora format.
2010-03-15 — I have placed the work under a Creative Commons license, granting you rights to redistribute, remix, translate, and adjust the work, so long as you give me credit for originally creating the work. (See above for details.)
2010-03-05 — Added a lower-quality version of the video, and added a .zip file containing the source files I worked on.

Posted by Clinton Blackmore - Friday March 5, 2010.

Comment

  1. Happy March Friday, Clinton!

    What an excellent “how to” video! I’d very much like to show this video to my FLL and FTC students. Your clear, concise descriptions, clean graphics, and excellent view of your coding process make this an easy-to-follow presentation of a complex, multi-process program. I’ve attempted for years to get my students past the linear “spaghetti” code you mentioned. Please let me know if it’s okay to use in my lab. I work in a school in Edmonds, Washington and support students in a hands-on lab. I’m happy to share the results and responses from students!

    ~cw

    Cathy Webb · Mar 12, 02:33 PM · #

  2. Cathy,

    Please, go right ahead and make use of the video. That’s what I made it for. Indeed, I’ll probably put it under a creative commons license.

    I hope your students find it to be useful.

    Clinton

    Clinton Blackmore · Mar 12, 08:02 PM · #

  3. Hi Clinton

    Thank you soooo much for this video. It’s easy to understand and also educates the students along the way on how to use the interface effectively. It has helped me to understand how to use the variable blocks and data wires in a short space of time (vital for a busy teacher). Reading above I am going to let my small robotics club students watch and construct this next week. I’ll let you know how they get on. Love the robot too.

    Debra Wood · Apr 22, 06:12 PM · #

  4. Debra,

    Glad to hear it! Wish your young roboticists the best of luck!

    Clinton

    Clinton Blackmore · Apr 22, 06:32 PM · #

  5. Hello Clinton,

    Thank you very much for creating this tutorial for NXT-G applications! It is amazing to come across such an understandable and easy-to-follow tutorial.

    May I ask a few questions about it, too?

    - Does the arbiter loop always execute the selected state every iteration?

    - What is an efficient way to queue up other triggers while a behavior is still executing? (I think this is applicable to, let’s say, an elevator, where I press the floor buttons and it queues up the other actions.) Or is there a different approach (not behavior-based) for this type of application?

    Kind Regards,
    Wendell

    Wendell · Jun 6, 08:44 PM · #

  6. Wendell,

    I’m glad you like the video.

    I’m not entirely sure I understand your first question, but perhaps this will answer it:

    You can write behaviour actions in two ways (a rough sketch of both follows the list):

    1. The behaviour is short and happens repeatedly. [This is how most of the behaviours in the wall-follower work. For example, as long as the robot finds that it is too close to the wall (and no higher-priority behaviour is in effect), it will run a short little command to steer away from the wall. It will likely go through this cycle dozens of times before it is done and away from the wall.]

    2. The behaviour is long, and, when finished, the initial triggering condition is gone. [This is how the ‘Escape’ behaviour works. When the robot hits the wall, it triggers a series of events that last for a few seconds — it backs up and turns, and, when done, it is no longer hitting the wall. The behaviour action only happens once for every time the bumper is hit.]
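
    To make the two styles concrete, here is a rough Python sketch of an arbiter loop (only an analogy; NXT-G is graphical, and the sensor and motor helpers below are invented placeholders):

        import time

        def distance_to_wall():              # placeholder for the ultrasonic sensor
            return 20.0

        def bumper_pressed():                # placeholder for the touch sensor
            return False

        def steer(left, right, seconds):     # placeholder for driving the motors
            time.sleep(seconds)

        def escape():
            # Style 2: a long, one-shot action. When it finishes, the robot is
            # no longer touching the wall, so the trigger is gone.
            steer(-0.5, -0.5, 1.0)           # back up
            steer(0.7, -0.7, 0.5)            # turn away

        def arbiter_step():
            # One pass of the arbiter: run only the highest-priority behaviour
            # whose trigger is currently true.
            if bumper_pressed():
                escape()
            elif distance_to_wall() < 15:
                # Style 1: a short nudge away from the wall, repeated over many passes
                steer(0.3, 0.7, 0.05)
            else:
                steer(0.5, 0.5, 0.05)        # default: cruise forward

        while True:                          # the robot's main loop runs forever
            arbiter_step()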

    One way you can queue up behaviours is to use a latch — or something that retains its state. Let us say that, when a button is pressed, you want the robot to do an action based on that whenever it is done doing things that are a higher priority. Instead of using a trigger that says (and, forgive me, NXT-G does not express itself well in text), “check if the button is pressed and store the value in button_action” use a trigger that says, “if the button is pressed, set button_action to true”. The first can only execute while the button is being depressed. The second will remember the button press until the corresponding action occurs — but you must ensure that when the behaviour action occurs, it sets button_action back to false.

    I tried this technique and found that sometimes it will execute your action when it is no longer desirable to do so. (I think I did it for the “close to the wall” trigger/action. Whenever the robot went to escape, it would become close to the wall. The value would become latched. After the escape behaviour was done, even if it was far away from the wall now, it remembered that it had been close and initiated the behaviour to steer away from the wall.)
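
    In rough Python terms, the latch might look like this (the button and action helpers are made up; the point is setting the flag and clearing it once the action has run):

        button_action = False                # the latch: remembers the press

        def button_pressed():                # placeholder for the touch sensor
            return False

        def do_button_action():              # placeholder for the queued behaviour
            print("doing the button action")

        def check_triggers():
            global button_action
            if button_pressed():             # "if the button is pressed,
                button_action = True         #  set button_action to true"

        def act_when_nothing_more_important():
            global button_action
            # ...higher-priority behaviours would have been checked first...
            if button_action:
                do_button_action()
                button_action = False        # essential: clear the latch, or the
                                             # action can fire again later

        # one pass of the program's main loop:
        check_triggers()
        act_when_nothing_more_important()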

    Clinton Blackmore · Jun 7, 07:59 AM · #

  7. Hello Clinton,

    Thanks for the recommendation! I applied it; however, I ended up making a variable to detect whether an action is still being performed, so that another action is not triggered until the arbiter is free. In my example, inputs 1 and 2 are buttons and B and C are motors. When button 1 is bumped, motor B runs slowly for 5 seconds, and likewise, when button 2 is bumped, motor C runs for 5 seconds. What happened was that when button 1 was bumped, motor B ran, but while it was running, motor C would not run even if button 2 was bumped. Only after motor B finished would motor C run, and then only if button 2 was bumped again. I think I have made the arbiter execute exclusively one action and ignore all others. My question is: did this defeat behavior-based programming and turn it into a different concept altogether?

    Regards,
    Wendell

    — Wendell · Jun 8, 12:20 AM · #

  8. If I’m understanding you right, you want motor B to go when button 1 is pressed, and motor C to go when button 2 is pressed (regardless of what motor B is doing).

    If that is all the program does, you can have two parallel sequence beams, and each beam looks something like this:

    “loop { wait until button is pressed ; run the motor }”
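
    If it helps to see that in text, here is a loose Python equivalent using one thread per sequence beam (wait_for_bump and run_motor are just stand-ins for the Wait and Motor blocks):

        import threading
        import time

        def wait_for_bump(port):             # stand-in for a Wait-for-touch block
            time.sleep(1.0)

        def run_motor(port, seconds):        # stand-in for a Motor block
            print("running motor", port)
            time.sleep(seconds)

        def beam(button_port, motor_port):
            # "loop { wait until button is pressed ; run the motor }"
            while True:
                wait_for_bump(button_port)
                run_motor(motor_port, 5)

        # the two beams now run in parallel, like the robot program would
        threading.Thread(target=beam, args=(1, "B")).start()
        threading.Thread(target=beam, args=(2, "C")).start()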

    If you do want a behaviour-based approach, consider using an arbiter for each independent output of the system. The robot in my video uses one arbiter to control one output — the whole robot. It is possible to use either one more complicated arbiter, or a series of simpler arbiters, that control outputs for different systems. One could control the output for Motor B. Another could control what happens to Motor C. A third could control what is displayed on screen, and a fourth could control what sounds are emitted. All can make use of the same triggers, but act on them differently.
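
    A rough sketch of that idea, with invented trigger names (each arbiter may read any of the shared triggers, but only ever commands its own output):

        # Invented trigger helpers; in the real program these would read
        # (or latch) the touch sensors.
        def button_1_latched():
            return False

        def button_2_latched():
            return False

        def motor_b_arbiter():
            # Chooses what Motor B does on this pass, and nothing else.
            if button_1_latched():
                return "run B slowly for 5 seconds"
            return "coast"

        def motor_c_arbiter():
            # Chooses what Motor C does on this pass, independently of B.
            if button_2_latched():
                return "run C slowly for 5 seconds"
            return "coast"

        # One pass of the main loop: both outputs get a decision every time,
        # so a command for Motor C is never blocked by one for Motor B.
        b_command = motor_b_arbiter()
        c_command = motor_c_arbiter()
        print(b_command, c_command)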

    Behaviour-based control is not always the appropriate answer. It works best when you want to add complexity by taking a base behaviour and adding additional behaviours to it.

    Clinton Blackmore · Jun 8, 06:40 AM · #

  9. Now I get it! I can do it with multiple arbiters too! Thanks for the amazing information you have provided, Clinton; it will be a lot of help when I do my projects. I see that behavior-based control only applies to some situations, but it would be an easier way to tackle a robot that deals with a lot of sensors.

    — Wendell · Jun 9, 01:17 AM · #

  10. Glad to hear you’ve got the vision. Good luck, Wendell.

    Clinton · Jun 9, 06:49 AM · #

  11. Hi Clinton,
    I just wanted to thank you for your generosity in taking the time to make the video. I teach a Robotics class to 8th-graders, and have been using the ideas you present for a while now. Your explanations are excellent, and have helped me teach more effectively. Thanks!

    Sincerely,
    Brian

    — Brian Kelly · Feb 25, 10:52 AM · #

  12. Thank you, Brian, for taking the time to write a comment. I always like to hear that others have found my videos helpful!

    Clinton Blackmore · Feb 25, 05:09 PM · #

  13. Hi Clinton,
    I am not sure you got my first message, which is why I’m writing again.
    First of all, very helpful video and code. Thank you so much. I have a project for my engineering class: a maze-solver robot, which has to find a black point in the maze, stop, and drop a ball.
    So, my question is: do I have to add another parallel behaviour, with the highest priority since it is the last one? I also have difficulty adjusting the 90-degree turns, and then another 90 degrees, which means 180 degrees around the wall.
    Please help me if you can.
    Thank you very much, and I’m sorry about my English; I hope I was clear enough.
    Jovan, Macedonia

    — Jovan · Nov 8, 05:46 PM · #

  14. Hi, Clinton
    I really need help making a robot that can solve a maze. I can use two touch sensors, one ultrasonic sensor, one color sensor, and three motors. The robot has to find a black spot in the maze and drop a ball.
    The idea of a wall hugger is very good, but I don’t know the layout of the maze, so there is a situation where it goes in circles around the walls.
    If you, or someone else reading these comments, can help me, I will appreciate it so much.
    Thanks

    — Jovan · Nov 15, 07:51 PM · #

  15. Jovan,

    I’m thinking along these lines:

    The robot has some sort of sensor to know if a wall is in front of it, and if there is a wall to the right of it. It also has a sensor to detect if it is at the place where it needs to stop.

    The robot navigates the maze using the right-hand rule. Imagine you are a person walking through a maze. Hold your right hand against the wall, and walk forward. When you come to a corner where you can turn right, do so. If you come to a wall in front of you, turn left 90 degrees. This method will solve many mazes (but not all; you can still run in loops, and a more advanced solution is beyond the scope of this comment!)

    I would use the following behaviours, in order from least important to most important:

    * drive forward
    * turn right around a corner
    * spin left when blocked
    * stop at target

    The ‘drive forward’ behaviour is the default, and is active if nothing more important is triggered. The robot drives straight forward.

    The ‘turn right around a corner’ behaviour is triggered when you detect that there is no wall to the right of the robot. This behaviour lasts long enough to get the robot around the corner. It may be easier said than done, but I’d drive forward, turn 90 degrees to the right, and drive forward one wall length.

    The ‘spin left when blocked’ is triggered when there is a wall in front of the robot. The behaviour is to turn left 90 degrees. (Note that at a dead end, the behaviour will trigger twice in a row).

    The ‘stop at target’ behaviour will be triggered when the target in the maze is detected. It will cause the robot to stop, and maybe dance a jig or even power down — who knows. This is the highest priority behaviour.
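
    Put together as one arbiter, a rough Python sketch might look like this (the sensor and driving helpers are only placeholders; the real work is tuning them on the robot):

        def at_target():                     # colour sensor sees the black spot
            return False

        def wall_in_front():                 # touch or ultrasonic sensor ahead
            return False

        def no_wall_on_right():              # ultrasonic sensor to the right
            return False

        def stop_and_drop_ball():            # highest-priority action
            print("target found")

        def spin_left_90():                  # placeholder driving helpers
            pass

        def turn_right_around_corner():      # forward, 90 degrees right, forward
            pass

        def drive_forward_a_little():
            pass

        def arbiter_step():
            # Run the most important behaviour whose trigger is true.
            # Returns True once the target has been reached.
            if at_target():
                stop_and_drop_ball()
                return True
            elif wall_in_front():
                spin_left_90()
            elif no_wall_on_right():
                turn_right_around_corner()
            else:
                drive_forward_a_little()
            return False

        for _ in range(1000):                # bounded so the sketch terminates
            if arbiter_step():
                break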

    If you need to solve a more complicated maze, where the right-hand rule doesn’t work, you’ll need to build an internal map of the maze and remember where you had another choice to take. I’m sure there is information on how to do it on the internet — but good luck.

    I hope this helps.

    Clinton

    — Clinton Blackmore · Nov 17, 09:37 PM · #

  16. Thank you very much, Clinton; that helped me so much. I really appreciate it.
    Sincerely Jovan, Macedonia

    — Jovan · Nov 29, 04:01 PM · #