Visual Servoing Platform version 3.6.0
servoPioneerPoint2DDepthWithoutVpServo.cpp

Example that shows how to control a Pioneer mobile robot by IBVS (image-based visual servoing) with respect to a blob. The current visual features are s = (x, log(Z/Z*)) and the desired ones are s* = (x*, 0), with:

- x the abscissa of the point corresponding to the blob center of gravity, measured at each iteration,
- x* the desired abscissa of this point, here x* = 0,
- Z the depth of the point, estimated at each iteration,
- Z* the desired depth, learned at initialization and equal to the initial depth.

The degrees of freedom that are controlled are (vx, wz), where wz is the rotational velocity and vx the translational velocity of the mobile platform at the point M located midway between the two wheels.

The feature x allows controlling wy in the camera frame (which corresponds to wz of the platform), while log(Z/Z*) allows controlling vz (which corresponds to vx of the platform). The value of x is measured with a blob tracker. The value of Z is estimated from the area of the blob: the quantity 1/sqrt(m00/(px py)) is proportional to the depth Z.

/****************************************************************************
*
* ViSP, open source Visual Servoing Platform software.
* Copyright (C) 2005 - 2023 by Inria. All rights reserved.
*
* This software is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
* See the file LICENSE.txt at the root directory of this source
* distribution for additional information about the GNU GPL.
*
* For using ViSP with software that can not be combined with the GNU
* GPL, please contact Inria about acquiring a ViSP Professional
* Edition License.
*
* See https://visp.inria.fr for more information.
*
* This software was developed at:
* Inria Rennes - Bretagne Atlantique
* Campus Universitaire de Beaulieu
* 35042 Rennes Cedex
* France
*
* If you have questions regarding the use of this file, please contact
* Inria at visp@inria.fr
*
* This file is provided AS IS with NO WARRANTY OF ANY KIND, INCLUDING THE
* WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
*
* Description:
* IBVS on Pioneer P3DX mobile platform
*
*****************************************************************************/
#include <iostream>

#include <visp3/robot/vpRobotPioneer.h> // Include first to avoid build issues with Status, None, isfinite

#include <visp3/blob/vpDot2.h>
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpConfig.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/core/vpImageConvert.h>
#include <visp3/core/vpTime.h>
#include <visp3/core/vpVelocityTwistMatrix.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/sensor/vp1394CMUGrabber.h>
#include <visp3/sensor/vp1394TwoGrabber.h>
#include <visp3/sensor/vpV4l2Grabber.h>
#include <visp3/visual_features/vpFeatureBuilder.h>
#include <visp3/visual_features/vpFeatureDepth.h>
#include <visp3/visual_features/vpFeaturePoint.h>
#if defined(HAVE_OPENCV_VIDEOIO)
#include <opencv2/videoio.hpp>
#endif
#if defined(VISP_HAVE_DC1394) || defined(VISP_HAVE_V4L2) || defined(VISP_HAVE_CMU1394) || defined(HAVE_OPENCV_VIDEOIO)
#if defined(VISP_HAVE_X11) || defined(VISP_HAVE_GDI)
#if defined(VISP_HAVE_PIONEER)
#define TEST_COULD_BE_ACHIEVED
#endif
#endif
#endif
#undef HAVE_OPENCV_VIDEOIO // To use a firewire camera
#undef VISP_HAVE_V4L2 // To use a firewire camera
#ifdef TEST_COULD_BE_ACHIEVED
int main(int argc, char **argv)
{
try {
vpImage<unsigned char> I; // Create a gray level image container
double depth = 1.;
double lambda = 0.6;
double coef = 1. / 6.77; // Scale parameter used to estimate the depth Z
// of the blob from its surface

Aria::init();

vpRobotPioneer robot;
ArArgumentParser parser(&argc, argv);
parser.loadDefaultArguments();
// ArRobotConnector connects to the robot, gets some initial data from it
// such as type and name, and then loads parameter files for this robot.
ArRobotConnector robotConnector(&parser, &robot);
if (!robotConnector.connectRobot()) {
ArLog::log(ArLog::Terse, "Could not connect to the robot.");
if (parser.checkHelpAndWarnUnparsed()) {
Aria::logOptions();
Aria::exit(1);
}
}
if (!Aria::parseArgs()) {
Aria::logOptions();
Aria::shutdown();
return EXIT_FAILURE;
}
// Wait 3 sec to be sure that the low level Aria thread used to control
// the robot is started. Without this delay we experienced a delay
// (around 2.2 sec) between the velocity sent to the robot and the
// velocity that is really applied to the wheels.
vpTime::sleepMs(3000);
std::cout << "Robot connected" << std::endl;
// Camera parameters. In this experiment we don't need a precise
// calibration of the camera
vpCameraParameters cam;
// Create the camera framegrabber
#if defined(HAVE_OPENCV_VIDEOIO)
int device = 1;
std::cout << "Use device: " << device << std::endl;
cv::VideoCapture g(device); // open the camera
g.set(cv::CAP_PROP_FRAME_WIDTH, 640);
g.set(cv::CAP_PROP_FRAME_HEIGHT, 480);
if (!g.isOpened()) // check if we succeeded
return EXIT_FAILURE;
cv::Mat frame;
g >> frame; // get a new frame from camera
vpImageConvert::convert(frame, I);
// Logitech sphere parameters
cam.initPersProjWithoutDistortion(558, 555, 312, 210);
#elif defined(VISP_HAVE_V4L2)
// Create a grabber based on v4l2 third party lib (for usb cameras under
// Linux)
vpV4l2Grabber g;
g.setScale(1);
g.setInput(0);
g.setDevice("/dev/video1");
g.open(I);
// Logitech sphere parameters
cam.initPersProjWithoutDistortion(558, 555, 312, 210);
#elif defined(VISP_HAVE_DC1394)
// Create a grabber based on libdc1394-2.x third party lib (for firewire
// cameras under Linux)
vp1394TwoGrabber g(false);
// AVT Pike 032C parameters
cam.initPersProjWithoutDistortion(800, 795, 320, 216);
#elif defined(VISP_HAVE_CMU1394)
// Create a grabber based on CMU 1394 third party lib (for firewire
// cameras under windows)
vp1394CMUGrabber g;
g.setVideoMode(0, 5); // 640x480 MONO8
g.setFramerate(4); // 30 Hz
g.open(I);
// AVT Pike 032C parameters
cam.initPersProjWithoutDistortion(800, 795, 320, 216);
#endif
// Acquire an image from the grabber
#if defined(HAVE_OPENCV_VIDEOIO)
g >> frame; // get a new frame from camera
vpImageConvert::convert(frame, I);
#else
g.acquire(I);
#endif
// Create an image viewer
#if defined(VISP_HAVE_X11)
vpDisplayX d(I, 10, 10, "Current frame");
#elif defined(VISP_HAVE_GDI)
vpDisplayGDI d(I, 10, 10, "Current frame");
#endif
vpDisplay::display(I);
vpDisplay::flush(I);
// Create a blob tracker
vpDot2 dot;
dot.setGraphics(true);
dot.setComputeMoments(true);
dot.setEllipsoidShapePrecision(0.); // to track a blob without any constraint on the shape
dot.setGrayLevelPrecision(0.9); // to set the blob gray level bounds for binarisation
dot.setEllipsoidBadPointsPercentage(0.5); // to accept up to 50% of inner
// and outside points with a bad
// gray level
dot.initTracking(I);
// Current and desired visual feature associated to the x coordinate of
// the point
vpFeaturePoint s_x, s_xd;
// Create the current x visual feature
vpFeatureBuilder::create(s_x, cam, dot);
// Create the desired x* visual feature
s_xd.buildFrom(0, 0, depth);
// Current and desired log(Z/Z*) visual features
vpFeatureDepth s_Z, s_Zd;
// Surface of the blob estimated from the image moment m00 and converted
// in meters
double surface = 1. / sqrt(dot.m00 / (cam.get_px() * cam.get_py()));
double Z, Zd;
// Initial depth of the blob in front of the camera
Z = coef * surface;
// Desired depth Z* of the blob. This depth is learned and equal to the
// initial depth
Zd = Z;
s_Z.buildFrom(s_x.get_x(), s_x.get_y(), Z,
0); // log(Z/Z*) = 0, that's why the last parameter is 0
// Create the desired log(Z/Z*) visual feature
s_Zd.buildFrom(0, 0, 1, 0); // The value of s* is 0; the Z value (here 1 meter) is not used by the error
// Interaction matrix of the x feature
vpMatrix L_x = s_xd.interaction(vpFeaturePoint::selectX());
// Interaction matrix of the log(Z/Z*) feature
vpMatrix L_Z = s_Z.interaction();
// Camera to mobile platform velocity twist transformation
vpVelocityTwistMatrix cVe = robot.get_cVe();
vpMatrix eJe; // Pioneer jacobian
robot.get_eJe(eJe);
vpMatrix L; // Global interaction matrix
L.stack(L_x); // constant since built with the desired feature
L.stack(L_Z); // not constant since it corresponds to log(Z/Z*) that
// evolves at each iteration
vpColVector v; // Velocity (vx, wz) sent to the mobile platform
while (1) {
// Acquire a new image
#if defined(HAVE_OPENCV_VIDEOIO)
g >> frame; // get a new frame from camera
vpImageConvert::convert(frame, I);
#else
g.acquire(I);
#endif
// Set the image as background of the viewer
vpDisplay::display(I);
// Does the blob tracking
dot.track(I);
// Update the current x feature
vpFeatureBuilder::create(s_x, cam, dot);
// Update log(Z/Z*) feature. Since the depth Z changes, we need to update
// the interaction matrix
surface = 1. / sqrt(dot.m00 / (cam.get_px() * cam.get_py()));
Z = coef * surface;
s_Z.buildFrom(s_x.get_x(), s_x.get_y(), Z, log(Z / Zd));
L_Z = s_Z.interaction();
// Update the global interaction matrix
L = L_x; // constant since built with the desired feature
L.stack(L_Z); // not constant since it corresponds to log(Z/Z*) that
// evolves at each iteration
// Update the global error s-s*
vpColVector error;
error.stack(s_x.error(s_xd, vpFeaturePoint::selectX()));
error.stack(s_Z.error(s_Zd));
// Compute the control law. Velocities are computed in the mobile robot
// reference frame
v = -lambda * (L * cVe * eJe).pseudoInverse() * error;
std::cout << "Send velocity to the Pioneer: " << v[0] << " m/s " << vpMath::deg(v[1]) << " deg/s" << std::endl;
// Send the velocity to the robot
robot.setVelocity(vpRobot::REFERENCE_FRAME, v);
// Draw a vertical line which corresponds to the desired x coordinate of
// the dot cog
vpDisplay::displayLine(I, 0, 320, 479, 320, vpColor::red);
vpDisplay::flush(I);
// A click in the viewer to exit
if (vpDisplay::getClick(I, false))
break;
}
std::cout << "Ending robot thread..." << std::endl;
robot.stopRunning();
// wait for the thread to stop
robot.waitForRunExit();
return EXIT_SUCCESS;
}
catch (const vpException &e) {
std::cout << "Caught an exception: " << e << std::endl;
return EXIT_FAILURE;
}
}
#else
int main()
{
std::cout << "You don't have the right 3rd party libraries to run this example..." << std::endl;
return EXIT_SUCCESS;
}
#endif