Version History
Current stable release: 4.0.1
Under development: 5.0 (CVS)
Backburner
See something you'd like to work on? Let us know, and we can coordinate development.
Here's the current wishlist:
- Automatic initialization of color segmentation thresholds
- Walking:
- Upright walk (instead of on knees)
- Smoother walk (better for vision)
- A new walk engine using dynamic step generation (so we can say "move forward 5 steps", or so it can handle rough (or at least "not level") terrain)
- Sound playback speed/pitch control, volume controls
- A vision module for automatically adjusting shutter speed and gain
- Color correction for blue corners of ERS-7 camera images
- Also see the automatically generated todo list and Bugzilla backburner items.
Changelog
4.0.1
- Minor maintenance update for building under 64-bit architectures and gcc 4.3
4.0
- New Features:
- Hardware Abstraction Layer, allows control over a variety of hardware devices and moves Tekkotsu beyond the Aibo platform. The 'sim' executable has been renamed 'tekkotsu'.
- Initial robot models: Qwerk-based, LynxMotion 6-DOF arm, custom "Regis" prototype, beta support for Dynamixel/Bioloid kits and the iRobot Create.
- Hardware components: webcams (local or streaming), SSC-32 RC servo controller, TeRK (Telepresence Robotics Kit) interface (for Qwerk-based robots), Dynamixel servos (for Bioloid kits)
- Offboard communication via network, serial port, or third-party executable.
- Can use either single-process (multi-threaded) execution model, or simulator's original multi-process (forked) execution model. The HAL 'Multiprocess' setting defaults to 'false' to select the threaded model for better performance and ease of debugging, but can be set to 'true' to revert to fork() for more accurate simulation of the Aibo's environment.
- Thanks to Benson Tsai for TeRK support and ongoing Create development, Glenn Nickens for Regis prototype and kinematics, Zhengheng Gho for additional kinematics work, Harald for 'wacaw' OS X webcam sample code
- Capabilities class provides more robust and flexible mapping for porting behaviors between different robot models.
- Aibo Telepathy project provides inter-robot event subscription (thanks Brett Simmers and Lisa Storey)
- Particle Filter for state estimation, particularly useful for robot localization and mapping.
- Imported 'zignor' library from Jurgen A. Doornik, provides normally-distributed random number generation. (needed for Particle Filter)
- Expanded motion calibration ability, now includes offset (calibration_offset) as well as scale (calibration_scale) parameters (thanks Kyle Comer, kcomer AT andrew cmu edu)
- Configuration editor provides interactive modification of the framework parameters (in Controller, File Access -> Tekkotsu Configuration)
- netstream provides iostream-based network communication as an alternative to Wireless and Socket.
- Doesn't support non-blocking communication (yet), so you will want to spawn your own thread to handle data flow.
- Not Aibo compatible (cannot spawn threads, cannot handle blocking communication, would need to rewrite system API usage)
- Build system supports building shared libraries for faster linking.
- API Changes:
- Output names no longer include '~' padding to make them a constant length. Framework code which loads such parameters from storage (i.e. Config, PostureEngine, MotionSequenceEngine) has been modified to strip trailing '~' characters from output names for backward compatibility.
- Using degrees in posture and motion sequence files is deprecated.
- PostureEngine no longer requires weights to be provided — each output list is assumed to be weight=1 unless otherwise specified.
- General cleanup and simplification of MapBuilder and Lookout. There is only one request type now, and one very simple constructor. All other parameters are set by assigning to members of the instance.
- Replaced old custom configuration format with plist-based data structures. This should be largely transparent, and provides groundwork for the configuration editor. It also allows components to be notified when a user modifies their parameters.
- Behaviors are encouraged to inherit from plist::Dictionary to manage their state. This provides convenient serialization of behavior status, and future releases may include a "Behavior Editor" akin to the Configuration Editor which will allow users more convenient interaction with their behaviors.
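- For illustration, a behavior managing one tunable value might look roughly like the sketch below (hedged: the header paths and the plist::Primitive/addEntry() usage are assumptions based on the plist classes described above, not text from these release notes):
      #include "Behaviors/BehaviorBase.h"
      #include "Shared/plist.h"
      class SampleDemo : public BehaviorBase, public virtual plist::Dictionary {
      public:
          SampleDemo() : BehaviorBase("SampleDemo"), speed(0.5f) {
              addEntry("speed", speed);   // register the member for serialization/editing
          }
      protected:
          plist::Primitive<float> speed; // saved and loaded along with the rest of the dictionary
      };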
- DualCoding sketches can support image resolutions beyond 64K pixels.
- Condensed jpeg/png support into image_util namespace.
- Removed all 'using namespace' statements from framework header files; user code may need to explicitly import std or DualCoding itself.
- Head remote control (HeadPointerGUI) no longer actively maintains position, so if you leave it running it won't interfere with other behaviors.
- Bug Fixes:
- Fixed 'extra qualification' error generated by gcc 4.1 (thanks daniel.hladek AT tuke sk) »
- Should work for 64-bit architectures now.
- Fixed some small issues for Mac OS X 10.5 Leopard.
- HAL/Simulator now pretends to play sounds and generate appropriate events (actual sound playback still not implemented)
- Improved compatibility with Subversion version control system (thanks Daniel Casner) »
- Documentation errors fixed by Daniel Höh
- New Features: »
- DualCoding robot vision programming facility
- Toolkit for constructing visual parsing algorithms, intended for deliberate, in-depth visual processing of a scene
- Based on symbolic and iconic "duals", allows easy extraction/rendering of shapes from/to raster images -- some algorithms are better expressed in one encoding or the other
- Maintains three "spaces": camera, local (egocentric), and world (allocentric). Methods are provided for mapping objects between spaces (i.e. project from camera space to ground plane, then register local area against world map via localization)
- ControllerGUI has buttons for launching the SketchGUI for each of (C)amera, (L)ocal, and (W)orld space, to view the parse tree of processing done in each space.
- For more info, see the Tekkotsu tutorial chapter on dual-coding representations, and the visual routines lecture notes from the CMU Cognitive Robotics course, weeks 3-4.
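- As a rough illustration of the iconic/symbolic flow (a sketch only; the macros and helpers shown follow the tutorial's idioms, and exact signatures should be checked against the DualCoding headers):
      // inside a behavior with access to camera shape space (e.g. derived from VisualRoutinesBehavior)
      NEW_SKETCH(camFrame, uchar, sketchFromSeg());                          // iconic: segmented camera image
      NEW_SKETCH(orangeMask, bool, visops::colormask(camFrame, "orange"));   // iconic: pixels of one color
      NEW_SHAPEVEC(blobs, BlobData, BlobData::extractBlobs(orangeMask, 50)); // symbolic: blobs of at least 50 pixels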
- Simulator now provides sensor loading (»), PNG loading, and motion feedback (»).
- Now supports precise sensor/image input timing and synchronization using an index file generated by VisionGUI: a tab-separated file with the filename to load in the first column and the time at which to load it in the last column. See the help on the simulator configuration settings Vision.Source and Sensors.Source »
- Added new 'freeze' and 'step' commands, see Simulator usage page.
- Simulator is now functionally complete -- can run any behavior you would otherwise run on the robot
- Imported PitchDetector code from Jonah Sherman and Matus Telgarsky's final project in CMU's Cognitive Robotics course »
- EventRouter now guarantees that all listeners for an event will receive that event before the next event is posted »
- In other words, if a new event is posted during processing of another event, the posting of the original event is resumed and completed before the next one is started.
- Does not change contract -- you can still assume all processing on a posted event is completed by the time postEvent() returns; the new feature is that postEvent() will resume the previous post before starting on the current one.
- This fixes the issue of LogEvent and other Listeners receiving events in "reverse" order
- StackTrace interface -- generate stack traces during runtime like what is available in Java
- Spun off as its own sourceforge project
- Doesn't quite have debugging symbol access, so for now only provides hexadecimal addresses to pass to command line tools addr2line or new Tekkotsu tool 'trace_lookup'
- addr2line gives function and line number under Linux, 'trace_lookup' looks up address in Aperios binary (must be run from within project directory in order to access binaries)
- Aibo3D updated for ERS-7 (requires Java3D) -- highly recommended to visualize status of robot when running in the simulator
- Added PNGGenerator to vision pipeline, CameraBehavior now saves PNGs to memory stick instead of our made-up "RAW" format. VisionGUI image streaming doesn't support PNG yet though, compression choices still either JPEG or "none".
- ControllerGUI provides default scripts for switching between high quality vs. high framerate video if no preference file is detected (i.e. first run, or if you delete your java preferences) »
- Makefile build system supports parallel builds
- Significant reduction in build time on dual core or hyperthreading-enabled processors
- To use, 'make -jN', where N is the number of processors on your system
- LoadSave adds encode()/decode() and encodeInc()/decodeInc() (thanks Daniel Höh) »
- VisionGUI can now record sensor frames alongside images -- look for "Save joint positions as well" checkbox in the save dialog
- User code can send commands to the simulator via Simulator::sendCommand("foo")
- Simulator provides dx/dy/dxdy image channels and provides better quality image scaling (both replicate more closely what's provided by the Aibo hardware) »
- Added Makefile target 'ftpupdate', use IPADDRESS=<ipaddr> to specify network address. (thanks Michael Voigt) »
- Entering text in the "Send input:" box of ControllerGUI with a BehaviorSwitchControl selected will send the text as a "private" TextMsgEvent to the behavior controlled by that menu entry. »
- Quick reference sheets listed on documentation page
- API Changes:
- Standardized capitalization style of LoadSave (») and SoundManager (»)
- LoadFile() -> loadFile(), LoadBuffer() -> loadBuffer(), etc.
- EventRouter::postEvent() now prefers pass-by-reference instead of pass-by-pointer. (Pass-by-pointer version is deprecated.) »
- Since we guarantee all listeners receive the event before postEvent() returns, no need to allocate events on the heap, allocate on the stack instead for better efficiency.
- Just need to drop the 'new' on most calls: erouter->postEvent(new TextMsgEvent("foo")) becomes erouter->postEvent(TextMsgEvent("foo"))
- Timer cross-talk now enabled, with new dedicated TimerEvent class »
- 2.4 already deprecated the use of addListener(timerEGID) for creating new timers to make room for this feature.
- As before, use erouter->addTimer(...) to request timers; addListener(timerEGID) is now only used to subscribe to other listeners' timers
- You don't need to call addListener(timerEGID) unless you specifically want to listen to externally-requested timers; addTimer() sends the TimerEvent to its creator without an additional addListener(timerEGID) call
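- A minimal sketch of the intended usage (the addTimer() argument order shown here -- listener, timer id, period in ms, repeat flag -- is our reading of the API and should be checked against EventRouter's documentation):
      #include "Behaviors/BehaviorBase.h"
      #include "Events/EventRouter.h"
      class BlinkDemo : public BehaviorBase {
      public:
          BlinkDemo() : BehaviorBase("BlinkDemo") {}
          virtual void DoStart() {
              BehaviorBase::DoStart();
              erouter->addTimer(this, 0, 500, true);  // repeating 500 ms timer, id 0
          }
          virtual void processEvent(const EventBase& event) {
              if(event.getGeneratorID()==EventBase::timerEGID && event.getSourceID()==0) {
                  // periodic work here -- the TimerEvent comes straight back to its creator,
                  // no addListener(timerEGID) call was required
              }
          }
      };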
- U and V color channels have been swapped -- we have been reversing the usage of these channels throughout the framework since its inception due to mis-mapping the 'Cr' and 'Cb' channels supplied by the system. »
- There may be a historical reason related to maintaining compatibility with CMPack's threshold (.tm) files
- To maintain compatibility with existing .tm files, the file loading code will swap channels in files with the YUV8 header. New files created with EasyTrain will have a YUV'8 header, and will load directly without swapping.
- Profilers moved out of WorldState into dedicated shared memory regions
- WorldState sensor processing moved from Main to Motion process
- Don't want Motion to become sensor starved if Main locks up on some long-term computation.
- Implementation introduces WorldStatePool, where Motion can update one state instance, while Main is reading a different instance, and then when Main completes processing it switches to read from the newest sensor values.
- Console port (10001) now defaults to Controller-always mode. »
- This removes the change in interpretation which would occur when ControllerGUI was connected, where input was interpreted as text messages instead of Controller commands.
- A new configuration item, main_config::consoleMode, controls the interpretation: one of CONTROLLER, TEXTMSG, or AUTO, the last providing backward compatibility.
- WorldStateSerializer protocol changed to send robot's model name and frame number before each packet.
- AdvanceOnAccess simulator setting replaced by usage of 'step' command when paused
- EasyTrain's color selection "spectrum" files have changed file extension from .spec to .spc so they can be saved directly to ms directory without causing filesystem issues (8.3 name requirement)
- Providing new default threshold file, generated by doing segmentation in EasyTrain in three different color spaces (hsb, xy, yuv), and then using the new Vote tool (tools/seg/Vote) to combine their results. The color selections for each color space are provided in project/ms/config/easytrn.
- vision.gain now defaults to mid instead of high. The rationale is that segmentation generally handles darker shadows better than washed-out specularity. You may want to increase gain if you have dim lighting, or trade it off for a faster shutter (vision.shutter_speed).
- Moved some behaviors from "Demos" to new "Services" directory to indicate they are intended for useful functioning of the framework, not just sample code. »
- Bug Fixes: »
- Fixed possible crash triggered by toggling estop via back button double-tap »
- Actually a bigger issue involving process identity confusion causing invalid mutex acquisition
- Fixed bug with vision calibration in config->vision.computeRay() »
- SoundManager was not generating audioEGID deactivate events »
- Behaviors can now auto-stop (call their own DoStop()) anytime without causing a crash »
- Fixed initialization issue in RegionGenerator causing garbage data if accessed directly without adding an EventListener »
- Rightmost column of image was not being segmented correctly »
- Fluorescent white balance setting was misspelled in tekkotsu.cfg (and Config.cc parser) »
- MotionManager wasn't blending partial-weight motions correctly, should get remaining weight from lower-priority motion(s). »
- Fixed ftpinstall/ftpupdate scripts (thanks Daniel Höh) »
- Environment.conf's FILENAME_CASE setting should be working again (a few parts of the Makefile were assuming lower case) »
- Default values of TEKKOTSU_DEBUG and TEKKOTSU_OPTIMIZE are now based on the target platform; the defaults still enable debugging for the simulator, but enable optimization on the Aibo, yielding a significant performance boost over the default settings in 2.4, where optimization was turned off for both.
- Known Bugs: »
- The simulator is incompatible with Cygwin, apparently due to a bug in the cygserver background process which handles inter-process communication, causing it to hang. Details are unknown because the primary debugger (GDB) is also not fully functional under Cygwin, which hampers efforts to determine the cause of the problem (and would also limit the simulator's usefulness even if it were working). »
- ControllerGUI occasionally blanks window on connect »
- Minor memory leak in HeadPointerMC creation/destruction »
- Bug Fixes:
- Plugged significant memory leak which occurred when Raw Cam is open with JPEG compression. »
- Fixed some build issues with test code (tools/test/*)
- New Features:
- All entry points of user code execution are now wrapped by try/catch blocks. »
- If any exceptions occur, they will be displayed and processing will continue, instead of calling abort()
- The exception handling is done through a call to ProjectInterface::uncaughtException(), which is a function pointer you can reassign to a function of your own design. (If you prefer an abort(), reassign it to a function which either returns false or calls abort() directly.)
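- For example, to restore the old abort-on-exception behavior, something along these lines should work (a sketch only; the handler's parameter list below is an assumption -- match it to the declaration in ProjectInterface.h):
      #include <cstdlib>
      #include <iostream>
      #include "Shared/ProjectInterface.h"
      // Assumed signature -- check ProjectInterface.h for the actual parameters.
      bool abortOnException(const char* file, unsigned int line, const char* message) {
          std::cerr << "Uncaught exception at " << file << ":" << line << ": "
                    << (message ? message : "(no message)") << std::endl;
          std::abort();     // opt out of the new display-and-continue default
          return false;     // unreachable; returning false also signals "not handled"
      }
      // during startup, e.g. in your StartupBehavior:
      //   ProjectInterface::uncaughtException = &abortOnException;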
- New Features:
- Simulator
- A highly requested feature -- particularly useful for classes with few Aibos and many students.
- So far, provides full-fledged functionality for networking, vision, and tool development.
- Not done yet! This release lays much of the groundwork; additional features such as motion, sensors, and sound, as well as teleoperation and remote execution, will come in later releases.
- A project file for building with XCode is provided.
- Imported new libraries: libpng, libxml2, zlib
- These libraries are precompiled for Aperios and found in the aperios directory.
- When building the simulator, the host machine's own copy of these libraries will be used.
- Geometric and radiometric calibration of the ERS-7's camera has been performed.
- Only the geometric calibration actually made it into released code.
- Much of the calibration was done using MATLAB, so future recalibrations will still need to be done manually, but the code to apply geometric calibrations is now in place.
- New EasyTrain (tools/easytrain) color segmentation threshold tool -- thanks Eric Durback and Matt Carson
- Similar to the previous "seg" tool, but considerably more capable
- Now you can select regions of the image to see the corresponding areas of color space
- Selected areas are now saved as well as the thresholds themselves so you can re-edit your thresholds
- A variety of bug fixes
- Graphics package for basic camera image markup, try the demos under "Vision Pipeline" menu:
- "Draw Object Boundaries" - Image overlay of VisionObjectEvent bounding boxes over RawCam/SegCam images »
- "Draw Skeleton" - overlays kinematic framework over camera feed (can only really see front legs)
- "Stare At Paw" (under "Mode Switch" -> "Kinematics Demos") now draws a box over where the tip of the toe should be.
- The results should be visible in the Raw Cam, but the object boundaries can also be seen in Seg Cam.
- Region Viewer -- thanks Harm Aarts and Niels van Hoorn »
- WaypointEngine has much improved diagrams and documentation »
- FlashIPAddrBehavior speaks (and flashes) IP address for use in unknown or dynamically assigned networks »
- VisionObjectEvent now includes a camera frame number field -- thanks Ignacio Herrero Reder »
- Patch for throttles and a new !select variation -- thanks Douglas Blank »
- Check for filename problems in ms directory before copying to memory stick »
- EventGeneratorBase can filter its own source on type ID »
- Camera parameter information stored in RobotInfo and Config »
- Add area field to VisionObjectEvent »
- srand() configuration item to enable or disable automatic seeding of pseudo-random number generator »
- API Changes:
- Vision Pipeline stages now each throw 3 events per image:
- "activate" event - indicates new frame is available for read-only access
- "status" event - indicates initial processing on frame has completed, behaviors can now draw into the frame (see Graphics, outlined above)
- "deactivate" event - indicates completion of drawing, interested behaviors can now access completed markup of the frame (e.g. Raw Cam listens for the "deactivate" stage so that it can send the marked up images to the GUI)
- Significant reorganization accompanied simulator support: several files were moved from Shared to a new directory named IPC (Inter-Process Communication), and MMCombo, SoundPlay, and TinyFTPD were moved into a new directory called aperios.
- libjpeg is now built from original source and included as a precompiled library within aperios instead of having its source files integrated with the framework. When building the simulator, the host machine's own copy of libjpeg will be used.
- VisionObjectEvent cleanup »
- BehaviorBase::DoStop() now automatically calls erouter->removeListener(), so there's one less thing that could go wrong (but if you like, feel free to still call it yourself as well ;)
- The ControllerGUI launch script now automatically forks to the background so you can get your command prompt back.
- Bug Fixes:
- Accessing vision pipeline stages "directly" can return obsolete or invalid data (possibly causing a crash) »
- MotionManager won't handle adding same SharedObject again, while still already active »
- error in behavior.h template for operator= »
- EventTranslator has occasional bad event »
- Transition names are not automatically generated properly »
- WalkMC::setTargetDisplacement doesn't throw event on completion »
- WalkMC::setTargetDisplacement moves half the requested amount »
- vision train won't save files outside of tools/seg directory -- thanks Ken Dwyer »
- WaypointWalk load/save files broken »
- Streaming video connection dropped during long computation (UDP timeout) »
- VisionObjectEvent returns width instead of height -- thanks Benjamin Wu »
- Posture Editor leaves PostureMC running »
- Socket printf doesn't check for buffer boundaries »
- MotionSequenceEngine::curstamps uninitialized (causes trouble if MS added as "paused") »
- RegionGenerator::getBinSize unimplemented »
- Java "Listeners" spawn thread before construction complete »
- GUI connection problems with Java 1.5+ »
- Vision network streams should send notification packet when Aibo shuts down so GUI can go into reconnect mode without depending on a timeout »
- Missing deactivate events for visObjEGID -- thanks Ignacio Herrero Reder »
- ControllerGUI estop button icon status vs. action »
- fix the stupid 'offsetof' warning in Profiler »
- missing jpeglib from convertmot Makefile -- thanks Nuno Lopes »
- Known Bugs:
- Memory usage is up 6.2MB: precompiled memory sticks report 28.5 MB used RAM, which might be a little tight on ERS-210s (which have 32MB of RAM -- ERS-7s have 64MB, so it's not as much of a concern there)
- The release benchmarks for the SegmentedColorGenerator and RLEGenerator vision pipeline stages show CPU usage is up 4.6ms per frame (previously 2.0 ms/frame). Since there wasn't much code added there, some investigation may be in order. (should be a lot easier now we can use profiling tools on the desktop!)
- New Features: »
- Report internet (IP) address for use within unknown or dynamically assigned networks »
- Press and hold chin and head buttons for 2 seconds to trigger (make sure estop is off so the Controller doesn't trap the button events)
- New configuration items flash_on_start and flash_bytes will allow automatic reporting on boot and pruning of the static network portion
- Numeric sound clips are available in ms/data/sound/numbers
- VisionObjectEvent now contains bounding box information so you don't have to dig up the corresponding region to find it. »
- Also now contains clipping flags so you can easily test whether the object extends beyond the camera image.
- Thanks to Ignacio Herrero Reder
- Make the ControllerGUI "title" field into a drop-down menu »
- Add isListening() suite to EventRouter »
- buildRelease should replace search.php with http://cvs.tekkotsu.org/search.php »
- MotionSequenceEngine::getPose() allows access to current posture »
- Sensor Observer should be able to give real-time feedback within the Controller »
- Need ModelInfo::sensorNames[] for user feedback »
- Network Status report provides LED feedback on wireless strength (thanks Kate Libby) »
- Add new file templates (Control, State, Transition) »
- API Changes:
- StateNode provide postStartEvent(), postCompletionEvent(), postStopEvent() functions »
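- A rough sketch of a node using the new call (the constructor form and an instantaneous action are assumed for brevity):
      #include "Behaviors/StateNode.h"
      class PingNode : public StateNode {
      public:
          PingNode() : StateNode("PingNode") {}
          virtual void DoStart() {
              StateNode::DoStart();
              // ... do this node's (instantaneous) work ...
              postCompletionEvent();  // lets a CompletionTrans leaving this node fire
          }
      };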
- EventRouter::addListener(...) for timerEGID is deprecated »
- Coordinates in floating point representation (-1..1) are now "square", so the range of the y-axis is no longer (-1..1) but instead a function of the aspect ratio (±0.769 on the ERS-7, i.e. 160/208, the height-to-width ratio of its camera image) »
- PostureMC*, HeadPointerMC*, and MotionSequenceMC* now actively maintain joint positions after their final target has been reached. A new parameter, hold, can be used to turn this off and revert to the previous behavior. (Doesn't affect pruned motion commands.)
- PostureMC* and HeadPointerMC* have a timeout to prevent deadlock between conflicting motion commands
- PostureMC now has speed limits to prevent "snapping" between postures (speed limits can be modified or turned off); it also generates a status event when the posture is reached »
- StateNode::transitionTo() and transitionFrom() are superfluous »
- Bug Fixes: »
- Spaces in MEMSTICK_ROOT setting cause trouble »
- Focus on control items after each reconnect »
- Java 1.5 yields deprecation warnings »
- Posture Editor doesn't keep joints "active" »
- MotionManager::getOutputCmd() returns value of 0 when joint is unused, should return joint position »
- EStop causes oscillation when activated during fast motion (e.g. walking) »
- MotionSequences which specified an initial (t=0) frame lose it through SaveFile »
- ModelInfo::ButtonNames should be buttonNames »
- ButtonNames isn't defined in ERS2xxInfo »
- MotoObj and SoundPlay overflow event translator upon startup »
- ButtonNames skips "RBkPaw" »
- PlayMotionSequenceNode should be renamed MotionSequenceNode for consistency »
- Socket::printf(...) and cousins don't emit compiler warnings for format strings »
- Overview diagram is out of date (no single "vision" global any more) »
- Known Bugs:
- MotionManager won't handle adding same SharedObject again, while still already active »
- Event Logger doesn't show timer events (need timer cross-talk) »
- ControllerGUI grabs focus on display switch »
- New Features: »
- Add body interest points »
- Concise printing of column vectors via NEWMAT::printmat wrapper class »
- TimeOutTrans can be extended to time out if an event is not received within a certain amount of time »
- HeadPointerMC will give a warning message if its joint index argument is out of bounds »
- MotionSequence::LoadFile should be able to load a posture file »
- Added Config::vision_config::computePixel(), the inverse of Config::vision_config::computeRay() »
- Added WalkMC::setTargetDisplacement, which allows walking n steps in a given direction »
- Created SpiderMachineBehavior, to be used with an as-yet-unreleased GUI tool for viewing and logging state machines »
- API Changes:
- Kinematics uses the terminology "link frame" vs. "reference frame", which is a poor word choice. Better to say "link reference frame" vs. "joint reference frame". Documentation and function names to be changed. »
- Frame => Joint; e.g. linkToFrame => linkToJoint
- Motion Sequence class names cleaned up. Old names will be available but deprecated. Be warned, however: the compiler doesn't seem to accept the deprecated attribute for typedefs as it's supposed to, so you won't get compiler warnings for deprecated use. »
- MotionSequence => MotionSequenceEngine
- MotionSequenceMC<MotionSequence::SizeXXX> => XXXMotionSequenceMC (XXX is one of Small, Medium, Large, XLarge)
- MotionSequence::SizeXXX => XXXMotionSequence::CAPACITY
- [gs]etPlayTime => [gs]etTime
- [gs]etPlaySpeed => [gs]etSpeed
- Added advanceTime(unsigned int x)
- To lessen the learning curve, .mot file commands are now the same as the corresponding function names, e.g. setTime instead of settime, and advanceTime instead of delay; however, old-style .mot commands are still supported for backward compatibility.
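- Illustration of the renamed API (a sketch; the SharedObject/motman idiom is the framework's usual way of creating motions, not something new in this release):
      SharedObject<MediumMotionSequenceMC> ms;  // was MotionSequenceMC<MotionSequence::SizeMedium>
      ms->setTime(1000);       // was setPlayTime()
      ms->setSpeed(0.5f);      // was setPlaySpeed()
      ms->advanceTime(500);    // newly added; matches the .mot file command of the same name
      motman->addPrunableMotion(ms);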
- Bug Fixes: »
- PostureEditor Load Posture disappears (and doesn't refresh when e-stop is turned off) »
- HeadPointerMC::lookAtPoint reference frame not explained in documentation »
- Save Posture doesn't assume .pos extension (also fixed for other file saving controls) »
- Interest points A,B should be E,F in ERS-7 head kinematics diagram »
- addMotion followed immediately by removeMotion will cause a crash »
- HeadFrButOffset is now available for ERS-7 to ease porting from ERS-210 »
- WalkToTargetMachine's "lost" timeout is too fast, can cause a timeout before walk parameters are finished loading, thus never gets anywhere on older 210 models »
- linkToFrame and frameToLink return identity when they shouldn't »
- getInterestPoint returns -1 for non-joint reference frames, should return the reference frame offset »
- MotionCommands which maintain dirty status should set the dirty flag on DoStart() »
- Event names are ugly, now standardized as (gen,source,type); more generators upgraded to name their sources for more readable debugging »
- 'make install' should trigger a warning suggesting 'make newstick' if the target model does not match the system binaries on the memory stick »
- hasListener for timerEGID is always false, should correctly handle the timer special case (now returns true if any behavior has a timer) »
- ERS-7 head kinematics diagram incorrectly shows link for chest IR »
- Default for unrecognized robot design strings should be ERS-7, not 210 »
- HeadPointerMC::lookAtPoint(x,y,z) has a very poor implementation »
- Known Bugs:
- PaceTargetsMachine crashes on occasion (fixed 2.3) »
- Java 1.5 yields deprecation warnings (but still seems to work OK) (fixed 2.3) »
- New Features:
- Added joint calibration parameters in tekkotsu.cfg (mainly needed for ERS-7 legs)
- Vision pipeline has lower overhead by leveraging erouter events added in 2.2
- Better handling of libraries and intermediary compilation units has reduced link time by 33%
- A registry of running behaviors is now maintained; see the new BehaviorReportControl (under "Status Reports" in the GUI)
- New Kinematics demo code (WallTestBehavior, mainly a demo of IR usage, but also shows two ways to do linear least squares using newmat)
- API Changes:
- Cleaned up Behavior names -- we've been muddling the difference between the name of the class (type) and the name of the instance.
- BehaviorBase::setName() added; BehaviorBase::getName() no longer needs to be overridden by subclasses (BehaviorBase::getName() simply returns BehaviorBase::instanceName)
- Also added getClassName(), which returns the value passed to BehaviorBase's constructor, as compared to getName(), which defaults to the same value but can be changed via setName()
- This means the default BehaviorBase() constructor is deprecated -- you should pass your class name to the BehaviorBase(std::string name) constructor
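- In practice (a sketch based directly on the description above):
      class MyDemoBehavior : public BehaviorBase {
      public:
          MyDemoBehavior() : BehaviorBase("MyDemoBehavior") {}  // pass the class name up
      };
      // ...
      MyDemoBehavior b;
      b.setName("demo #1");
      // b.getName()      -> "demo #1"         (instance name, changeable at runtime)
      // b.getClassName() -> "MyDemoBehavior"  (the value passed to the constructor)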
- TailWagMC cleaned up for ERS-7, directly accessible through a TailWagNode in the Background Behaviors menu; TailWagMC::active is now true by default.
- LedEngine::cycle()'s values are now clipped to the range [0,1] for more sensible blending of square-wave approximations (which use high-amplitude sine waves)
- Bug Fixes:
- Chained sound files which transition mid-buffer are now handled properly (thanks Alexander Klyubin)
- Fixed potential problem with long model names (thanks Alexander Klyubin)
- ftpupdate is a little smarter about which files have been copied by make update or make install, so you can intermix the methods.
- StateNode status events use the state's name for status stateMachineEGID events (already used for activate and deactivate events)
- Fixed ERS-2xx IR reference frame mistake (and related diagrams)
- safemot and convertmot tools fixed
- Makefile suggests better build directory names in /tmp to avoid conflicts between multiple projects (the default is to build in the source directory; /tmp is just an option)
- Split part of project/Makefile into project/Environment.conf -- this is then included by library and tool Makefiles to allow people to change settings in Environment.conf instead of having to set environment variables.
- Smarter handling of relative pathnames in TEKKOTSU_ROOT
- crashDebug scans the Makefile to find the current TEKKOTSU_TARGET_MODEL setting
- emonLogParser (used by crashDebug) works without needing OPEN_R_SDK/bin in your path (and thus crashDebug does too)
- "Save Image Sequence" button in VisionGUI now properly switches to "Save Image" after "Freeze Frame" when using UDP as the transport (which is the default setting)
- New Features:
- Improved kinematics engine for manipulation tasks
- Provided by ROBOOP and newmat.
- Includes an "interest point" database, calibrated IR beams, ground plane projection, and other useful utility code
- WaypointEngine and WaypointWalkMC allow following lines and circular arcs between points, with independent heading
- UDP streaming support
- Thanks to Bryan Johnson and Erik Berglund
- UDP is now the default for vision streams; this can be configured in the tekkotsu.cfg file through the raw_transport and rle_transport variables.
- Sound input support
- Thanks Paris Smaragdis
- Original system data is sent as an event using event generator ID micOSndEGID; micRawEGID and micFFTEGID are reserved for further development in 2.3
- Not to be confused with audioEGID, already used for reporting sound playback
- EventRouter will send events when a generator's first listener is added or last listener is removed, so generators (e.g. vision stages) can dynamically turn on/off based on dependencies
- Build system updated:
- Separate build directory for each target, so clean compiles are unnecessary when switching between targets
- Separates intermediary files (.d, .o, .log files) into the build directory, so they aren't mixed in with source files
- Makes it easier to keep source files in a shared file system area while keeping intermediary files in a local build directory for faster processing
- The TARGET_MODEL file is now unnecessary; set the TEKKOTSU_TARGET_MODEL environment variable instead
- New WalkMC parameters:
- useDiffDrive -- when turning, legs are moved in reverse directions instead of swinging out to the sides to twist the body.
- sag -- on the airborne return part of each step, plan to hit the ground higher than it would be if the supporting legs were ideally rigid. This lets you avoid hitting the ground while the foot is still moving in the wrong direction.
- The motivation for these was to support a manipulation class project
- Includes TileTrain, an alternative color segmentation trainer
- API Changes:
- ERS-7 is now the default build target. For instructions on switching build targets, see the Installation instructions.
- AutoGetupBehavior is now disabled by default at boot
- ControllerGUI no longer requires an initial FTP connection -- you can launch the ControllerGUI before booting the AIBO.
- Two new walk parameters; see above
- Array of EGID names changed to match the enumeration name
- Events now have a getDescription()
- LedMC can now autoprune after a flash (so beware of unintended autopruning in existing code)
- MotionManager::addMotion is deprecated; please use one of addPrunableMotion or addPersistentMotion instead -- this is to avoid potential confusion over whether a motion might delete itself
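- For example (a sketch; SharedObject is the framework's usual wrapper for motion commands):
      SharedObject<HeadPointerMC> head;
      MotionManager::MC_ID head_id = motman->addPersistentMotion(head);  // stays until removeMotion()

      SharedObject<LedMC> leds;
      motman->addPrunableMotion(leds);  // may remove itself, e.g. once its flash completes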
- EventRouter::forgetListener() is deprecated; EventRouter::removeListener(this) will now remove both timers and events, and should be used instead.
- HeadPointerMC has been significantly cleaned up -- gravity-relative commands have been removed, kinematic functions have been added
- SoundManager uses std::string instead of char* for filenames
- Certain MotionCommands will now send a (motmanEGID, mc_id, statusETID) event for intermediary status -- for instance, when the HeadPointerMC reaches its target location, or when an LedMC flash has completed. More to follow.
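- For example, to be notified when a particular motion command reaches its target (a sketch using the event fields listed above):
      // head_id is the MC_ID returned when the HeadPointerMC was added to motman
      erouter->addListener(this, EventBase::motmanEGID, head_id, EventBase::statusETID);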
- PostureEditor and RunMotionSequence no longer override the EStop. RunMotionSequence will wait for EStop to be turned off before playing.
- Bug Fixes:
- Kinematics updated to support ERS-7 manipulation
- No longer crashes if there is not a "red", "blue", and/or "brown" color defined
- Updated TinyFTPD to latest Sony sample code, adds support for passive mode
- ftpinstall and ftpupdate tools now use passive mode by default
- MotionSequenceMC:
- Handles all three common line endings (Unix: '\n', DOS: '\r\n', pre-OSX Mac: '\r')
- Fixed double-allocation when adding key frames
- Added warnings on serr when a MotionSequence runs out of buffer space (instead of silently failing and hanging on playback)
- Update 2004-04-13: Patch set available for adding microphone support -- includes a sample behavior for looking in the direction of the loudest sound!
- Update 2004-07-06: Patch set available for telepresence (relative to 2004-06-21 snapshot)
- Update 2004-07-06: Alternative segmentation trainer
- Update 2004-09-04: UDP transport support
- New Features:
- Support for (and now requires) OPEN-R SDK version 1.1.5
- Better walk calibration basis function
- Now uses a Gabor function of rotational speed instead of heading angle
- Bug Fixes:
- Corrected initialization problem with WalkMC where legs would try to straighten out before taking the first step (looks like a stumble on the first step)
- Code cleanup in StartupBehavior.cc
- Corrected method of spawning the vision pipeline stages
- Fixed ControllerGUI launcher script -- thanks Daishi MORI
Older release notes are available on a separate page.