Thursday, December 13, 2018

AWS RoboMaker: Robot Operating System (ROS), with connectivity to cloud services.



https://aws.amazon.com/robomaker/
AWS RoboMaker

Easily develop, test, and deploy intelligent robotics applications

AWS RoboMaker is a service that makes it easy to develop, test, and deploy intelligent robotics applications at scale. RoboMaker extends the most widely used open-source robotics software framework, Robot Operating System (ROS), with connectivity to cloud services. This includes AWS machine learning services, monitoring services, and analytics services that enable a robot to stream data, navigate, communicate, comprehend, and learn. RoboMaker provides a robotics development environment for application development, a robotics simulation service to accelerate application testing, and a robotics fleet management service for remote application deployment, update, and management.

Robots are machines that sense, compute, and take action. Robots need instructions to accomplish tasks, and these instructions come in the form of applications that developers code to determine how the robot will behave. Receiving and processing sensor data, controlling actuators for movement, and performing a specific task are all functions that are typically automated by these intelligent robotics applications. Intelligent robots are being increasingly used in warehouses to distribute inventory, in homes to carry out tedious housework, and in retail stores to provide customer service. Robotics applications use machine learning in order to perform more complex tasks like recognizing an object or face, having a conversation with a person, following a spoken command, or navigating autonomously.

Until now, developing, testing, and deploying intelligent robotics applications was difficult and time consuming. Building intelligent robotics functionality using machine learning is complex and requires specialized skills. Setting up a development environment can take each developer days, and building a realistic simulation system to test an application can take months due to the underlying infrastructure needed. Once an application has been developed and tested, a developer needs to build a deployment system to deploy the application into the robot and later update the application while the robot is in use.

AWS RoboMaker provides the tools to make building intelligent robotics applications more accessible, a fully managed simulation service for quick and easy testing, and a deployment service for lifecycle management. AWS RoboMaker removes the heavy lifting from each step of robotics development so you can focus on creating innovative robotics applications.



What is AWS RoboMaker?

How it works


AWS RoboMaker provides four core capabilities for developing, testing, and deploying intelligent robotics applications.
Cloud Extensions for ROS


Robot Operating System, or ROS, is the most widely used open source robotics software framework, providing software libraries that help you build robotics applications. AWS RoboMaker provides cloud extensions for ROS so that you can offload to the cloud the more resource-intensive computing processes that are typically required for intelligent robotics applications and free up local compute resources. These extensions make it easy to integrate with AWS services like Amazon Kinesis Video Streams for video streaming, Amazon Rekognition for image and video analysis, Amazon Lex for speech recognition, Amazon Polly for speech generation, and Amazon CloudWatch for logging and monitoring. RoboMaker provides each of these cloud service extensions as open source ROS packages, so you can build functions on your robot by taking advantage of cloud APIs, all in a familiar software framework.
Development Environment


AWS RoboMaker provides a robotics development environment for building and editing robotics applications. The RoboMaker development environment is based on AWS Cloud9, so you can launch a dedicated workspace to edit, run, and debug robotics application code. RoboMaker's development environment includes the operating system, development software, and ROS automatically downloaded, compiled, and configured. Plus, RoboMaker cloud extensions and sample robotics applications are pre-integrated in the environment, so you can get started in minutes.
Simulation


Simulation is used to understand how robotics applications will act in complex or changing environments, so you don’t have to invest in expensive hardware and set up of physical testing environments. Instead, you can use simulation for testing and fine-tuning robotics applications before deploying to physical hardware. AWS RoboMaker provides a fully managed robotics simulation service that supports large scale and parallel simulations, and automatically scales the underlying infrastructure based on the complexity of the simulation. RoboMaker also provides pre-built virtual 3D worlds such as indoor rooms, retail stores, and race tracks so you can download, modify, and use these worlds in your simulations, making it quick and easy to get started.
Fleet Management


Once an application has been developed or modified, you would normally need to build an over-the-air (OTA) system to securely deploy the application to the robot and later update the application while the robot is in use. AWS RoboMaker provides a fleet management service with robot registry, security, and fault tolerance built in, so that you can deploy, perform OTA updates, and manage your robotics applications throughout the lifecycle of your robots. You can use RoboMaker fleet management to group your robots and update them accordingly with bug fixes or new features, all with a few clicks in the console.



Benefits

Get started quickly


AWS RoboMaker includes sample robotics applications to help you get started quickly. These provide the starting point for the voice command, recognition, monitoring, and fleet management capabilities that are typically required for intelligent robotics applications. Sample applications come with robotics application code (instructions for the functionality of your robot) and simulation application code (defining the environment in which your simulations will run). The sample simulation applications come with pre-built worlds such as indoor rooms, retail stores, and racing tracks so you can get started in minutes. You can modify and build on the code of the robotics application or simulation application in the development environment or use your own custom applications.


Build intelligent robots


Because AWS RoboMaker is pre-integrated with popular AWS analytics, machine learning, and monitoring services, it’s easy to add functions like video streaming, face and object recognition, voice command and response, or metrics and logs collection to your robotics application. RoboMaker provides extensions for cloud services like Amazon Kinesis Video Streams (video streaming), Amazon Rekognition (image and video analysis), Amazon Lex (speech recognition), Amazon Polly (speech generation), and Amazon CloudWatch (logging and monitoring) to developers who are using the Robot Operating System (ROS). These services are exposed as ROS packages so that you can easily use them to build intelligent functions into your robotics applications without having to learn a new framework or programming language.


Lifecycle management


Manage the lifecycle of a robotics application from building and deploying the application, to monitoring and updating an entire fleet of robots. Using AWS RoboMaker fleet management, you can deploy an application to a fleet of robots. Using the CloudWatch metrics and logs extension for ROS, you can monitor these robots throughout their lifecycle to understand CPU, speed, memory, battery, and more. When a robot needs an update, you can use RoboMaker simulation for regression testing before deploying the fix or new feature through RoboMaker fleet management.


Thursday, October 11, 2018

Fwd: OpenMV News

---------- Forwarded message ----------
From: "OpenMV" <openmv@openmv.io>
Date: Oct 10, 2018 11:53 PM
Subject: OpenMV News
To: "John" <john.sokol@gmail.com>
Cc:

OpenMV Home - https://openmv.io/

Better CMSIS-NN Support

OCT 11, 2018 POSTED BY: KWABENA AGYEMAN

Hi folks - time for a short update,

First, thanks to everyone who's backed our OpenMV Cam H7 Kickstarter! We've now raised $70K for the OpenMV Cam H7! Awesome! Anyway, if you haven't backed us yet, please do! We've still got a few days left on the Kickstarter. https://www.kickstarter.com/projects/1798207217/openmv-cam-h7-machine-vision-w-micropython

Next, I spent some time updating the CMSIS-NN examples on the OpenMV Cam Github. We now have a README that walks you through how to use the library with exact command line values to run:

https://github.com/kwagyeman/openmv/tree/cmsis_improvements/ml/cmsisnn

With this new guide and a deep-learning machine you can now actually train networks. All of the networks will run on the OpenMV Cam H7; on the OpenMV Cam M7, only the smile and cifar10_fast networks are small enough to be runnable (networks need to be no more than about 30KB). Anyway, if you want to create your own custom CNN, you can do so by following our README walkthrough on how we made our smile-detection CNN. Once you've got a deep-learning rig and Caffe installed, training a new network is very easy.

Finally, a bug in ARM's CMSIS-NN code that was previously causing issues when running your own CNN has now been fixed on master in the OpenMV Cam GitHub repository.

Anyway, we're going to try to get an IDE release out with all these fixes along with new CNN examples now that we've documented how to do things.

Thanks for reading,

Copyright © 2018 OpenMV, LLC, All rights reserved.

Monday, September 24, 2018

Goertzel filter




https://en.wikipedia.org/wiki/Goertzel_algorithm

Many applications require the detection of a few discrete sinusoids. The Goertzel filter is an IIR filter that uses feedback to create a very high-Q bandpass filter whose coefficients are easily generated from the required centre frequency. The most common configuration for using this technique is to measure the signal energy before and after the filter and compare the two: if the energies are similar, the input signal is centred in the passband; if the output energy is significantly lower than the input energy, the signal is outside the passband. The Goertzel algorithm is most commonly implemented as a second-order recursive IIR filter, as shown below.
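The energy-comparison detection described above can be sketched in a few lines of Python (the function and variable names here are my own, not from any particular library):

```python
import math

def goertzel_magnitude(samples, k):
    """Magnitude of DFT bin k of `samples`, computed with the
    second-order recursive Goertzel filter."""
    n = len(samples)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:               # the recursive (feedback) stage
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    # Combine the two final filter states into the bin's squared magnitude
    power = s1 * s1 + s2 * s2 - coeff * s1 * s2
    return math.sqrt(max(power, 0.0))

# Tone detection: a unit cosine at bin 10 of a 64-sample frame shows
# large energy in the matching bin and almost none elsewhere.
n = 64
tone = [math.cos(2.0 * math.pi * 10 * i / n) for i in range(n)]
in_band = goertzel_magnitude(tone, 10)    # close to n/2 = 32
out_band = goertzel_magnitude(tone, 20)   # close to 0
```

Comparing `in_band` against `out_band` (or against the total input energy) is the pass-band test the paragraph above describes.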



https://github.com/jacobrosenthal/Goertzel

Matlab Code
function Xk = goertzel_non_integer_k(x, k)
%   Computes an N-point DFT coefficient for a
%   real-valued input sequence 'x' where the center
%   frequency of the DFT bin is k/N radians/sample.
%   N is the length of the input sequence 'x'.
%   Positive-valued frequency index 'k' need not be
%   an integer but must be in the range 0 to N-1.

%   [Richard Lyons, Oct. 2013]

N = length(x);
Alpha = 2*pi*k/N;
Beta = 2*pi*k*(N-1)/N;

% Precompute network coefficients
Two_cos_Alpha = 2*cos(Alpha);
a = cos(Beta);
b = -sin(Beta);
c = sin(Alpha)*sin(Beta) - cos(Alpha)*cos(Beta);
d = sin(2*pi*k);

% Initialize delay line contents
w1 = 0;
w2 = 0;

for n = 1:N % Start the N-sample feedback looping
    w0 = x(n) + Two_cos_Alpha*w1 - w2;
    % Delay line data shifting
    w2 = w1;
    w1 = w0;
end

Xk = w1*a + w2*c + j*(w1*b + w2*d);


Quiet Beacon

Low-powered beacon transmitter/receiver which can be used either on its own or in addition to libquiet. The transmitter creates a simple sine tone at a specified frequency, and the receiver uses the Goertzel algorithm to detect the presence of the tone with complexity O(n) for n samples.




Saturday, September 22, 2018

NaviPack LiDAR Navigation Module


https://www.indiegogo.com/projects/navipack-lidar-navigation-module-reinvented#/

https://www.youtube.com/watch?v=SBhIdXVnoZU&feature=share

https://robot.imscv.com/en/product/3D%20LIDAR


NaviPack makes any device smarter and easier to control. It uses the latest LiDAR technology and powerful APIs to create easy solutions for automated devices.

With its built-in SLAM algorithm chip, NaviPack is the first plug-and-play LiDAR navigation module. NaviPack is also the most affordable LiDAR solution for drones, robots, and other devices, instantly enabling them with powerful 360-degree sensing capabilities.

NaviPack integrates the SLAM algorithm with the LiDAR sensor module, making it super easy to use and significantly reducing development time.

NaviPack performs 360-degree scanning of its surroundings, detecting objects up to 15 meters away at 4,000 points per second. It is super easy to use! With the built-in SLAM module, it starts working immediately after being plugged into your device: scanning the environment, building a detailed map, and enabling auto-moving capability.


[Image: navipack ks explosion.jpg]








Wednesday, September 12, 2018

Keypoints in computer vision - OpenCV3 techniques

OpenCV3 - Keypoints in Computer Vision by Adrian Kaehler, Ph.D.




https://www.youtube.com/watch?v=tjuaZGvlBh4






Another good talk from him,


Future Talk #91 - Machine Vision, Deep Learning and Robotics

https://www.youtube.com/watch?v=kPq4lYGr7rE
A discussion of machine vision, deep learning and robotics with Adrian Kaehler, founder and CEO of Giant.AI and founder of the Silicon Valley Deep Learning Group.

Feynman's technique





“I had learned to do integrals by various methods shown in a book that my high school physics teacher Mr. Bader had given me. [It] showed how to differentiate parameters under the integral sign — it’s a certain operation. It turns out that’s not taught very much in the universities; they don’t emphasize it. But I caught on how to use that method, and I used that one damn tool again and again. [If] guys at MIT or Princeton had trouble doing a certain integral, [then] I come along and try differentiating under the integral sign, and often it worked. So I got a great reputation for doing integrals, only because my box of tools was different from everybody else’s, and they had tried all their tools on it before giving the problem to me.” (Surely you’re Joking, Mr. Feynman!)




Complex numbers, Quaternions and Octonions




https://en.wikipedia.org/wiki/Real_number   2^0 = 1 Dimension

https://en.wikipedia.org/wiki/Complex_number  2^1 = 2 Dimensions

https://en.wikipedia.org/wiki/Quaternion  2^2 = 4 Dimensions

https://en.wikipedia.org/wiki/Octonion  2^3 = 8 Dimensions

https://en.wikipedia.org/wiki/Sedenion  2^4 = 16 Dimensions

Trigintaduonions  2^5  = 32 Dimensions





In the construction of types of numbers, we have the following sequence:

R ⊂ C ⊂ H ⊂ O ⊂ S

or:

"Reals"  "Complex"  "Quaternions"  "Octonions"  "Sedenions"
With the following "properties":
  • From R to C you gain "algebraic-closure"-ness (but you throw away ordering).
  • From C to H we throw away commutativity.
  • From H to O we throw away associativity.
  • From O to S we throw away multiplicative normedness.
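Each doubling in this sequence can be generated mechanically by the Cayley–Dickson construction: a number at one level is a pair (a, b) of numbers from the level below, multiplied as (a, b)(c, d) = (ac − d*b, da + bc*), where * is conjugation. A toy Python sketch (the nested-tuple representation and function names are mine):

```python
def _is_scalar(x):
    return isinstance(x, (int, float))

def neg(x):
    if _is_scalar(x):
        return -x
    return (neg(x[0]), neg(x[1]))

def conj(x):
    """Conjugate: recursively negate the 'imaginary' half at every level."""
    if _is_scalar(x):
        return x
    a, b = x
    return (conj(a), neg(b))

def add(x, y):
    if _is_scalar(x):
        return x + y
    return (add(x[0], y[0]), add(x[1], y[1]))

def mul(x, y):
    """Cayley-Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*)."""
    if _is_scalar(x):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

# Quaternions as pairs of complex numbers, complex numbers as pairs of reals:
I = ((0, 1), (0, 0))   # i
J = ((0, 0), (1, 0))   # j
K = ((0, 0), (0, 1))   # k
```

With this convention `mul(I, J)` gives k while `mul(J, I)` gives −k, which is exactly the commutativity lost in the step from C to H.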

Why am I talking about this? Well, specifically, quaternions are of interest for robotics.

There are many different parameterizations for orientations:
  • Euler Angles 
  • Angle Axis
  • Rotation matrix 
  • Quaternions

The Euler-angle and angle-axis parameterizations have singularities!

Rotation Matrix 
  • 9 scalars, more complex regularization 
  • Concatenation: 27 multiplications
  • Rotating a vector: 9 multiplications


Quaternion
  • 4 scalars, easy regularization
  • Concatenation: 16 multiplications
  • Rotating a vector: 18 multiplications 

Compared to Euler angles they are simpler to compose and avoid the problem of gimbal lock. Compared to rotation matrices they are more compact, more numerically stable, and more efficient. ... When used to represent rotation, unit quaternions are also called rotation quaternions.

Quaternions and spatial rotation - Wikipedia

https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation

William Hamilton invented quaternions in 1843 as a method to allow him to multiply and divide vectors, rotating and stretching them.

Alternative to Euler and Dot Products. http://en.wikipedia.org/wiki/Dot_product
Quaternions are an extension of complex numbers. A quaternion has three imaginary elements, $i$, $j$ and $k$, and can be written in the form:
   $\tilde{Q} = q_w + q_x i + q_y j + q_z k$
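In this form a rotation is applied to a vector $v$ as $\tilde{Q} v \tilde{Q}^*$, and concatenating two rotations is one Hamilton product (the 16 multiplications counted above). A small self-contained Python sketch (helper names are mine, not from any particular library):

```python
import math

def q_mul(p, q):
    """Hamilton product of quaternions (w, x, y, z): 16 scalar multiplies."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def from_axis_angle(axis, angle):
    """Unit quaternion for a rotation of `angle` radians about unit `axis`."""
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0]*s, axis[1]*s, axis[2]*s)

def rotate(q, v):
    """Rotate vector v by unit quaternion q via q v q*."""
    _, x, y, z = q_mul(q_mul(q, (0.0,) + tuple(v)), q_conj(q))
    return (x, y, z)

# Rotating the x axis 90 degrees about z should give the y axis.
q = from_axis_angle((0.0, 0.0, 1.0), math.pi / 2.0)
v = rotate(q, (1.0, 0.0, 0.0))
```

Note the half-angle in `from_axis_angle`: the sandwich product $\tilde{Q} v \tilde{Q}^*$ applies the rotation twice over, so the quaternion itself encodes half the rotation angle.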



Singularities
One must be aware of singularities in the Euler angle parametrization when the pitch approaches ±90° (north/south pole). These cases must be handled specially. The common name for this situation is gimbal lock.
Code to handle the singularities is derived on this site: www.euclideanspace.com
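The singularity can be seen numerically: in a ZYX (yaw-pitch-roll) sequence at pitch = 90°, the resulting rotation depends only on the difference between roll and yaw, so shifting both by the same amount changes nothing and one degree of freedom is lost. A sketch in plain Python (helper names are mine):

```python
import math

def rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zyx(yaw, pitch, roll):
    """Rotation matrix for the ZYX (yaw-pitch-roll) convention."""
    return matmul(rz(yaw), matmul(ry(pitch), rx(roll)))

# At pitch = 90 deg, adding the same offset to yaw and roll yields
# the identical rotation matrix: the two axes have collapsed onto one.
A = euler_zyx(0.3, math.pi / 2, 0.5)
B = euler_zyx(0.3 + 0.2, math.pi / 2, 0.5 + 0.2)
```

This is the gimbal-lock case that quaternions avoid entirely.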

A sensor fusion algorithm for an integrated angular position estimation with inertial measurement units

A Gyro-Free Quaternion-based Attitude Determination system suitable for implementation using low cost sensors.

Orientation estimation using a quaternion-based indirect Kalman filter with adaptive estimation of external acceleration

Videos

https://youtu.be/d4EgbgTm0Bg What are quaternions, and how do you visualize them? A story of four dimensions.


https://www.youtube.com/watch?v=dttFiVn0rvc Math for Game Developers - Axis-Angle Rotation
https://www.youtube.com/watch?v=SCbpxiCN0U0 Math for Game Developers - Rotation Quaternions
https://www.youtube.com/watch?v=A6A0rpV9ElA Math for Game Developers - Quaternion Inverse
https://www.youtube.com/watch?v=CRiR2eY5R_s Math for Game Developers - Multiplying Quaternions
https://www.youtube.com/watch?v=Ne3RNhEVSIE Math for Game Developers - Quaternions and Vectors
https://www.youtube.com/watch?v=x1aCcyD0hqE Math for Game Developers - Slerping Quaternions Spherical Linear interpolation.
https://www.youtube.com/watch?v=fRSaaLtYj68 Math for Game Developers - Quaternion Wrapup and Review


https://www.youtube.com/watch?v=dul0mui292Q Math for Game Developers - Perspective Matrix Part 2
https://www.youtube.com/watch?v=jeO_ytN_0kk Math for Game Developers - Perspective Matrix

https://www.youtube.com/watch?v=8gST0He4sdE Hand Calculation of Quaternion Rotation
https://www.youtube.com/watch?v=KdW9ALJMk7s Quaternions Explained by Dan

https://www.youtube.com/watch?v=0_XoZc-A1HU FamousMathProbs13b: The rotation problem and Hamilton's discovery of quaternions (II)
