Friday, January 29, 2010

Clips

Real robot's clips:



Simulation's clips:

UvA:


B-Human:

Tuesday, January 26, 2010

23-25 Jan

We were able to successfully run the simulator and to give different commands for various behaviours to Nao in the simulation, e.g. to find the ball, kick it, or go to it.

xo is the command for invoking the various behaviours on Nao.

xo ? - lists all the available behaviours, referred to as options

These behaviours/options are written in XABSL (the Extensible Agent Behavior Specification Language) in the form of decision trees.

-------

To use our experimental joint data, we initially tried replacing the data in the kickLeftNao.mof file, but this produced no change in the simulation.

After some more attempts we found that we have to replace the joint data in the kickNaosimulator.mof file, in the proper order and units (a small conversion sketch is given at the end of this entry).

mof files are stored at - ...\Src\Modules\MotionControl\mof\
xo xabsl files are stored at - ...\Src\Modules\BehaviorControl\BH2009BehaviorControl\Options\

Our data is not yet working properly in the simulator: Nao falls over while kicking in the simulation.
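
To make the reordering and unit conversion less error-prone, a small helper could be used before pasting values into the .mof file. This is only a sketch: the joint names, the source/target orders and the assumption that our data is in radians while the .mof file expects degrees are all illustrative, not taken from the B-Human code.

import math

# Hypothetical order of our recorded joint data (values in radians, as returned by ALMotion).
SOURCE_ORDER = ["LHipPitch", "LHipRoll", "LKneePitch", "LAnklePitch", "LAnkleRoll"]
# Hypothetical order expected by one line of the target .mof file (values in degrees).
TARGET_ORDER = ["LHipRoll", "LHipPitch", "LKneePitch", "LAnklePitch", "LAnkleRoll"]

def convert_frame(frame_radians):
    """Reorder one frame of joint values and convert radians to degrees."""
    by_name = dict(zip(SOURCE_ORDER, frame_radians))
    return [round(math.degrees(by_name[name]), 1) for name in TARGET_ORDER]

# One recorded frame, reordered and converted before being pasted into the .mof file.
print(convert_frame([0.35, -0.10, 1.05, -0.55, 0.10]))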

Tuesday, January 19, 2010

19th Jan

Studying Choregraphe, I tried to figure out a balanced state for the kicking motion. First I designed a sequence of gestures, and then Ravi and I tried to modify it to achieve a faster and stronger kick. In this manner, I worked with Choregraphe's timeline method to design the general motion and then exported it as Python code, in which Ravi tried to change some of the detailed values to fix faults and obtain a better performance. Furthermore, we tried to use B-Human's cfg values to reproduce their motion, to get insight into how their program performs kicking and, consequently, to apply their motion's strength in ours. However, I could not work with B-Human's hardcoded values because most of them were not valid in Choregraphe. I think there are two possibilities: first, their order is not the same as explained in the report; second, the values may only be valid for the older version (1.3.13) of Choregraphe, because according to the people working on goal perception, the German code is only compatible with the 1.3.13 version of Choregraphe and Nao.

Besides, we also worked on the German code. In this case, Ravi tried to run the code in the B-Human simulator on the lab computer, and I tried to solve the compilation problem on my computer. Yesterday, thanks to Arnoud's help, Cygwin ran successfully on my computer, but there is still a problem with compiling the "libbhuman" and "_Nao" packages. The error message is:

1>------ Building libbhuman (Release) ------
1>bhuman.cpp
1>make: *** [../Build/libbhuman/Linux/Release/bhuman.o] Error 127
1>Project : error PRJ0019: A tool returned an error code from "Performing Makefile project actions"
1>Build log was saved at "file://d:\E-Books\UVA\AIProject\UvaProj\Build\libbhuman\Linux\Release\BuildLog.htm"
1>libbhuman - 1 error(s), 0 warning(s)

I searched a lot on the Internet and also compared my PATH and other parameters with those of the lab's computer to find anything missing in my compilation process, but found nothing.

Friday, January 15, 2010

15 Jan

Reading the tutorial, I made a behaviour box from scratch, coding it in Python (the other available option is Urbi).
It is named HelloWorld - the first program.

Using various methods from the ALMotion module's API, I was able to move Nao forward, then sideways, then raise the hand, rotate the wrist and open the hand to make a V shape with the fingers; this was tested both in Choregraphe and on Nao.

The methods used are:

  • closeHand
  • openHand
  • gotoAngles
  • addWalkStraight
  • walkSideways
  • walk


Walking methodology: To make Nao move, first a walk pattern has to be created using the various available methods (e.g. addWalkStraight and walkSideways are a few of them); then Nao is asked to walk using the walk method, as in the sketch below.
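
A minimal sketch of this pattern as it might be called from a Python script or a Choregraphe box. The method names are the ones listed above, but the exact arguments (distances in metres, a samples-per-step count) are assumptions from memory of the 1.3.x API, not checked against the documentation:

from naoqi import ALProxy

motion = ALProxy("ALMotion", "<ip address of Nao>", 9559)

# Assumed arguments: distance in metres plus a samples-per-step count.
motion.addWalkStraight(0.2, 25)   # queue a 20 cm straight walk
motion.walkSideways(0.1, 25)      # 10 cm sideways
motion.walk()                     # execute the queued walk pattern

motion.openHand("LHand")          # open the left hand afterwards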


For the above tasks, some other methods were also used which involved a different coding style. These did not work out: their definition is clear, but I could not understand why they did not work properly (maybe we need to explore more).

  • changeAngle
  • getAngle

Note: graphically, Choregraphe takes angles in degrees, but in code the methods take angles in radians.
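
For example, an angle taken from the Choregraphe GUI has to be converted before being passed to the scripting API:

import math

ankle_roll_deg = 25.0                          # value as displayed in the Choregraphe timeline
ankle_roll_rad = math.radians(ankle_roll_deg)  # value to pass to the ALMotion methods (about 0.44)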

The next thing we tried was to implement the kicking using the B-Human data. After coding it in the Choregraphe simulation, the local Nao was able to move its leg to different positions and to lift its left leg, but when we tried it on the real Nao it was not balanced and fell over every time.

Together with the above methods a new method was used:
  • setPosition
Maybe the error is in the interpretation of the B-Human data!

Bardia's log book

Primarily, I did some research about what Nao is, how it works, and what the Standard Platform field of the RoboCup competition is. The Standard Platform League is a RoboCup robot soccer league in which the robots operate completely autonomously. The ball used is an orange no-bounce hockey ball, 65 mm in diameter and weighing 55 g. Nao itself is an entirely programmable 60 cm high robot. The robot's platform is Linux, but it can be programmed in different programming languages from different platforms (Windows, MacOS, etc.) via cross-platform tools such as Choregraphe, Gostai, and even Microsoft Robotics Developer Studio.

Choregraphe includes all the graphical interfaces and behaviour libraries needed to create movements. It accepts the Urbi and Python languages, and it can directly call C++ modules developed separately. Gostai develops Urbi, which is also fully interfaced with C++, Java and Matlab.

Moreover, I read the B-Human team's report and found out that their code also includes a simulator with which the robot's sensor data and behaviours can be examined. Therefore, we decided to read the B-Human report, try to understand the applied algorithms, and compile the code. I tried to compile the code on my computer both on Windows Vista and on Linux Kubuntu.

First, I tried to install the needed packages according to the report. However, in some cases the versions mentioned in the paper were not available, and I had to install newer versions. After that, there were a lot of errors during compilation. The primary errors were due to the fact that a number of packages, like g++, had not been installed properly. I used the command below to solve that problem:

sudo aptitude install build-essential

However, there were still a number of errors that I could not solve. Therefore, I switched to Windows and tried to follow the report's steps to install the required packages. I installed Microsoft Visual Studio, and downloaded the "bhuman-cygwin.tar.bz2" and "opennao-academics-1.3.13-nao-geode.rar" packages from the Aldebaran and B-Human websites. I also tried to install Cygwin version 1.5, but its mirror servers installed it in the layout of the newer version: its "opt" folder contained a folder named "gcc-tool" instead of "crosstool". According to some forums, this difference in layout was due to the mirror server from which Cygwin had been installed, and the folders' contents also differed from each other. So I just copied the packages from the "bhuman-cygwin.tar.bz2" file to the mentioned locations. Finally, I added the path of the Cygwin folder to the environment variable. Thereafter I could compile the code and run the simulation, but two essential packages, "libbhuman" and "_Nao", were not compiled. The problem was reported as below, for example:

1>------ Rebuild All started: Project: libbhuman, Configuration: Release Win32 ------
1>Performing Makefile project actions
1>make: /bin/sh: Command not found
1>make: *** [clean] Error 127
1>Project : error PRJ0019: A tool returned an error code from "Performing Makefile project actions"
1>Build log was saved at "file://d:\E-Books\UVA\AIProject\UvaProj\Build\libbhuman\Linux\Release\BuildLog.htm"
1>libbhuman - 1 error(s), 0 warning(s)
========== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==========

I tried to find a solution on the Internet; all of the suggestions were about fixing the PATH variable, for example:

export PATH=.:/home/yap/bin:`printenv PATH`

setenv PATH=.:/home/yap/bin:/bin:/usr/local/bin

However, none of them worked with my Cygwin. In detail, the command "setenv" was not known to Cygwin, and although the first line seemed to work, it was not effective because restarting the console reset the PATH to its former value. Thus, modifying the PATH did not help me solve the problem.

Since dealing with the German code and overcoming the compilation issue was very time consuming and put us behind schedule, we switched to the Choregraphe environment. In fact, because the special actions in the code are hardcoded, it is much easier and faster to work in Choregraphe to figure out motions like kicking.

After installing Choregraphe, we faced a problem. Some motions like walking or turning around did not work on the robot; the error pointed to the lack of a package called "almotion". Finally, we figured out that this problem arose because the robot's software version (Bleu, 1.3.8) was older than our simulators' versions (1.3.13 and 1.3.17). Therefore, by connecting to the new robot (Rouge), we could run all the available motions on the robot.


14 Jan

Choregraphe was able to perform some movements on Nao, but no leg movements, which belong to the ALMotion module.

error: Error in Python module ............... ALMotion is not defined

The problem was that NaoQi was version 1.3.8 while the Choregraphe used was 1.3.13, so they were not compatible with each other.

When a Nao running NaoQi 1.3.17 was used with Choregraphe 1.3.17, everything worked fine.
-----------
The Green documentation discusses various things about Choregraphe, including tutorials about making behaviour boxes from scratch.

The Blue documentation discusses the API modules.

Using Python I was able to make a simple box for moving the right ankle of Nao.
The method used for this is: ALMotion.changeAngle("RAnkleRoll", 1)
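
Roughly what the script of such a box looks like. The skeleton below is the generic Choregraphe box class, the 0.3 rad offset is only an illustrative value, and the output name onStopped is the default one, which may differ per box:

from naoqi import ALProxy

class MyClass(GeneratedClass):            # GeneratedClass is provided by Choregraphe
    def __init__(self):
        GeneratedClass.__init__(self)

    def onInput_onStart(self):
        motion = ALProxy("ALMotion")              # proxy to the ALMotion module
        motion.changeAngle("RAnkleRoll", 0.3)     # relative change, in radians
        self.onStopped()                          # trigger the box's output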

--------------
The joint values mentioned in the B-Human .mof files do not seem to contain all the joint data; this is still not clear.

Useful Things

To check from the Visual Studio command prompt whether Nao is alive or not, use this command:
ping <ip address of Nao>

Nao's OS is a Linux OS, so it can be used like one.
To run any programme or module remotely, one can connect to Nao using an SSH client like Secure Shell Client and then run commands, transfer files, etc.

Commands for running Nao:

./status checks whether Nao and the B-Human process are running or not and outputs the result.
./naoqid start|stop|restart starts, stops or restarts NaoQi respectively.
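
These checks can also be scripted instead of being typed into an SSH client by hand. The sketch below assumes the third-party paramiko library; the login details and the directory containing the scripts are placeholders:

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("<ip address of Nao>", username="<user>", password="<password>")

# Run the status script from the directory where it lives on the robot.
stdin, stdout, stderr = ssh.exec_command("cd <directory with the scripts> && ./status")
print(stdout.read())
ssh.close()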

When NaoQi starts on Nao, it automatically loads some modules. These modules are listed in the autoload.ini startup file at /opt/naoqi/modules/lib/autoload.ini, so any module that is needed has to be added there by name; it will then be started automatically with NaoQi.

If any change is made to autoload.ini, NaoQi has to be restarted for the new modules to be loaded.

All of the above tasks can be done remotely by connecting to Nao using SSH.

Wednesday, January 13, 2010

12 Jan

Choregraphe: a piece of software to control, test and modify Nao's movements and behaviour.

I was able to connect to Nao using Choregraphe. Initially there was a connection error when a fixed port was used, but the connection went smoothly with the default port 9559.

Just after connection there was an error: "Some modules are missing, Choregraphe will not fully work - ALMotion module is missing or not compatible with current version of Choregraphe. The robot will not move... "

Due to the above error the connected Nao could not be enslaved (RED); it stayed either in the non-enslaved state (GREEN) or in the intermediate state (YELLOW).

The above error was due to the absence of the motion module in the startup file (autoload.ini) of NaoQi.

autoload.ini is located at: /opt/naoqi/modules/lib/autoload.ini

The Python module was also missing from autoload.ini, which gave an error while playing any script, hence no movement.

It is now clear that every module one intends to use has to be started in NaoQi.

After adding the Python and motion modules it was ready, but there was still no movement on Nao.

There was a conflict between the B-Human process and the motion module, as both of them were trying to move the robot.

Once B-Human was stopped, everything worked fine and we were able to move Nao.


It is not clear why B-Human itself was initially not able to move Nao.


Monday, January 11, 2010

11 Jan

Report and Code Understanding of B-Human Framework:
The main idea at the start is to understand how ball perception is implemented and which algorithms are used in its implementation. The whole procedure for the perception of the various objects or targets (goal, lines, field, ball, etc.) involves image processing, which is divided into 3 steps; the final (3rd) step then branches out per target. The 3 steps of image processing are the following:
  1. Segmentation and region building
  2. Region classification
  3. Feature extraction
The main focus here is the 3rd step, concerning BALL PERCEPTION, but a brief introduction to the other steps is given first.

  1. Segmentation and region building: This step uses the camera image (still to be read about and discussed) and creates regions after segmenting the image. Regionizer is the module/file used for this step, and the various parameters and thresholds can be configured in regionizer.cfg. It produces a RegionPercept, which contains the segments, regions, borders, etc., and is used in the next step (a toy sketch of the idea follows after this list).
  2. Region classification: This step is carried out by the RegionAnalyzer module/class. It iterates over all the regions in the RegionPercept, classifies each region as a line or a ball, and discards everything else. It returns LineSpots and BallSpots, which contain the possible candidates for the line-related targets (field lines, goal) and for the ball, respectively.
  3. Feature extraction: This step involves detecting the field lines, the goal and the ball. The main focus of the project work is on detecting the ball, which is discussed below.
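
This is not the B-Human Regionizer, just a toy illustration of the general idea behind step 1: classify pixels into color classes along scan columns, collect vertical runs of the same class, and group neighbouring runs into regions. All names, thresholds and the grouping rule are made up for the example.

import numpy as np

def color_class(pixel):
    """Toy color classifier: 'orange' for ball-like pixels, 'other' for everything else."""
    r, g, b = int(pixel[0]), int(pixel[1]), int(pixel[2])
    return "orange" if r > 150 and 50 < g < 150 and b < 100 else "other"

def build_runs(image, step=4):
    """Scan every step-th column and collect vertical runs of identically classified pixels."""
    runs = []  # entries: (x, y_start, y_end, class)
    height, width, _ = image.shape
    for x in range(0, width, step):
        y = 0
        while y < height:
            c = color_class(image[y, x])
            y_start = y
            while y < height and color_class(image[y, x]) == c:
                y += 1
            if c != "other":
                runs.append((x, y_start, y - 1, c))
    return runs

def group_runs(runs, step=4):
    """Greedily merge runs from neighbouring columns whose y-ranges overlap into crude regions."""
    regions = []
    for run in runs:
        x, y0, y1, c = run
        for region in regions:
            rx, ry0, ry1, rc = region[-1]
            if rc == c and x - rx == step and y0 <= ry1 and y1 >= ry0:
                region.append(run)
                break
        else:
            regions.append([run])
    return regions

# Something like these runs and regions is what the RegionPercept carries on to step 2.
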
DETECTING THE BALL: This task is handled by the BallPerceptor class. It uses the BallSpots found in the 2nd step: it iterates over each BallSpot in BallSpots, gives it a likelihood value called validity (in the report) between 0 and 1, and finally selects the most likely BallSpot as the ball. Each BallSpot goes through several stages and accumulates a validity. BallSpots with a validity below some threshold are discarded, i.e. they cannot be the ball; if the validity is above some other threshold, they can directly be considered the ball. All others, between these thresholds, have to go through further stages. (Here, I still need to find out which configuration file is used for setting the threshold values.)

A method called scanForBallPoints in the BallPerceptor class takes each BallSpot from BallSpots and finds its ball points (points on the ball, stored as vectors) by scanning the image in eight directions (left, right, up, down, and the four diagonals) starting from the BallSpot, subject to some boundary conditions. The ball points are handled by the BallPointList class defined in the same file.
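
Not B-Human's implementation, just a sketch of what scanning outward in eight directions from a seed pixel could look like, assuming a 2D boolean array that marks ball-coloured pixels:

# The eight scan directions: left, right, up, down and the four diagonals.
DIRECTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]

def scan_for_ball_points(orange_mask, seed_x, seed_y, max_steps=50):
    """Walk outward from the seed in each direction while the pixels stay ball-coloured;
    the last ball-coloured pixel found in each direction becomes a ball point."""
    height, width = orange_mask.shape
    ball_points = []
    for dx, dy in DIRECTIONS:
        x, y = seed_x, seed_y
        for _ in range(max_steps):
            nx, ny = x + dx, y + dy
            if not (0 <= nx < width and 0 <= ny < height) or not orange_mask[ny, nx]:
                break
            x, y = nx, ny
        ball_points.append((x, y))
    return ball_points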

Another method, named computeBallInImageLevenbergMarquardt, runs over each BallSpot; using its ball points it finds the center and radius of that particular BallSpot. This method handles two cases for the ball:

Case 1: the ball is fully visible: radius = the maximum distance between two different ball points; center = the average of those two ball points (as the ball points are stored as vectors, their average gives the center).

Case 2: the ball is only partially visible, maybe due to occlusion or because it lies at the border of the image. Initially one might think that knowing 3 boundary points of the circle is enough to find its center and radius, but it is not that easy. In the report they use the Levenberg-Marquardt method (least-squares fitting). The given reference is the NUbot 2003 Team Report; Section 3.6 of that report explains the use of least-squares circle fitting and the shortcomings of 3-point circle fitting very nicely.

Both of these cases are implemented in the method computeBallInImageLevenbergMarquardt.
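
As an aside, a simple least-squares circle fit can be written in the linear (Kåsa) form below. This is not the Levenberg-Marquardt routine used by B-Human, only an illustration of fitting a circle to partial ball points by least squares:

import numpy as np

def fit_circle_least_squares(points):
    """Fit a circle by solving x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c) in the least-squares sense."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones(len(pts))])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = (-a / 2.0, -b / 2.0)
    radius = float(np.sqrt(center[0] ** 2 + center[1] ** 2 - c))
    return center, radius

# Example: points on the upper half of a circle of radius 5 around (10, 20),
# roughly the situation of a ball that is half occluded.
angles = np.linspace(0.0, np.pi, 20)
arc = [(10 + 5 * np.cos(t), 20 + 5 * np.sin(t)) for t in angles]
print(fit_circle_least_squares(arc))   # approximately ((10.0, 20.0), 5.0)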

Now, using the center of the ball and its location relative to the horizon, the location of the ball is calculated in world coordinates relative to the robot (this method still has to be found). The ball is then projected back onto this location and the validity of the BallSpot is calculated (this method also still has to be found).

calculateValidityFirstRun first ensures that each BallSpot has a validity: all those with a value below a certain threshold are discarded, and those above a different threshold can already be considered a ball. BallSpots in between these thresholds go through an analysis based on color classes. This is done by covering the BallSpot with three squares of different sizes, then calculating the percentage of orange pixels lying in each region and comparing it with the threshold set for that region. The three squares are the following:
  1. The 1st square is the biggest square that fits inside the calculated ball of the BallSpot.
  2. The 2nd square is the smallest square bigger than the complete ball, without all the points inside the first region.
  3. The 3rd square has an edge of sqrt(2) * the diameter of the circle, with the same center as the circle.
Thresholds for each square:
  1. 1st region = 90% orange points
  2. 2nd region is ignored
  3. 3rd region = 2% orange points
Using these percentages, the likelihood or validity of the BallSpots is calculated; this is handled by calculateValiditySecondRun (a rough sketch of the check follows below).
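
A rough sketch of that check, under a few assumptions of my own: the input is a boolean orange mask, the 90% threshold is a lower bound for the inside of the ball, the 2% threshold is an upper bound for its surroundings, and the outer square is evaluated only outside the fitted circle. None of this is copied from the B-Human code.

import numpy as np

def square_patch(mask, cx, cy, half_edge):
    """Clip an axis-aligned square around (cx, cy) to the image; return the patch and its pixel grid."""
    h, w = mask.shape
    x0, x1 = max(0, int(cx - half_edge)), min(w, int(cx + half_edge) + 1)
    y0, y1 = max(0, int(cy - half_edge)), min(h, int(cy + half_edge) + 1)
    ys, xs = np.mgrid[y0:y1, x0:x1]
    return mask[y0:y1, x0:x1], xs, ys

def plausible_ball(orange_mask, cx, cy, radius):
    """Three-square test: mostly orange inside the ball, very little orange around it."""
    # 1st region: the biggest square inside the circle should be at least 90% orange.
    inner, _, _ = square_patch(orange_mask, cx, cy, radius / np.sqrt(2))
    inner_ratio = inner.mean() if inner.size else 0.0
    # 3rd region: square with edge sqrt(2) * diameter, counting only pixels outside the circle.
    outer, xs, ys = square_patch(orange_mask, cx, cy, radius * np.sqrt(2))
    outside = (xs - cx) ** 2 + (ys - cy) ** 2 > radius ** 2
    outer_ratio = outer[outside].mean() if outside.any() else 0.0
    return inner_ratio >= 0.90 and outer_ratio <= 0.02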

All these BallSpots, with their validities, are stored in ExtendedBallPercepts, and once all the BallSpots have been analysed, the one with the highest validity is stored in the BallPercept as the ball.


*Most of the text is taken directly from the report of the B-Human project.*

