Thursday, December 13, 2018

AWS RoboMaker: Robot Operating System (ROS), with connectivity to cloud services.

Easily develop, test, and deploy intelligent robotics applications

AWS RoboMaker is a service that makes it easy to develop, test, and deploy intelligent robotics applications at scale. RoboMaker extends the most widely used open-source robotics software framework, Robot Operating System (ROS), with connectivity to cloud services. This includes AWS machine learning services, monitoring services, and analytics services that enable a robot to stream data, navigate, communicate, comprehend, and learn. RoboMaker provides a robotics development environment for application development, a robotics simulation service to accelerate application testing, and a robotics fleet management service for remote application deployment, update, and management.

Robots are machines that sense, compute, and take action. Robots need instructions to accomplish tasks, and these instructions come in the form of applications that developers code to determine how the robot will behave. Receiving and processing sensor data, controlling actuators for movement, and performing a specific task are all functions that are typically automated by these intelligent robotics applications.

Intelligent robots are increasingly used in warehouses to distribute inventory, in homes to carry out tedious housework, and in retail stores to provide customer service. Robotics applications use machine learning to perform more complex tasks like recognizing an object or face, having a conversation with a person, following a spoken command, or navigating autonomously.

Until now, developing, testing, and deploying intelligent robotics applications has been difficult and time consuming. Building intelligent robotics functionality using machine learning is complex and requires specialized skills. Setting up a development environment can take each developer days, and building a realistic simulation system to test an application can take months due to the underlying infrastructure needed. Once an application has been developed and tested, a developer needs to build a deployment system to get the application onto the robot and later update it while the robot is in use.
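The sense-compute-act cycle described above can be sketched in a few lines. This is a minimal illustration, not a real robot API: the sensor reading and actuator command are simulated values, and the names (`sense`, `compute`, `act`) are hypothetical.

```python
# Minimal sketch of the sense-compute-act cycle of a robotics application.
# All values are simulated; a real application would read hardware sensors
# and drive actual motors.

def sense(distance_to_wall):
    """Pretend range-sensor reading, in meters."""
    return distance_to_wall

def compute(reading, stop_threshold=0.5):
    """Decide a forward velocity (m/s) from the sensor reading."""
    return 0.0 if reading < stop_threshold else 0.2

def act(velocity):
    """Pretend actuator command; a real robot would drive motors here."""
    return f"drive at {velocity} m/s"

def control_step(distance_to_wall):
    """One pass through the sense-compute-act loop."""
    return act(compute(sense(distance_to_wall)))
```

In a real robotics application this loop runs continuously, with frameworks like ROS handling the plumbing between sensing, decision, and actuation nodes.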

AWS RoboMaker provides the tools to make building intelligent robotics applications more accessible, a fully managed simulation service for quick and easy testing, and a deployment service for lifecycle management. AWS RoboMaker removes the heavy lifting from each step of robotics development so you can focus on creating innovative robotics applications.

What is AWS RoboMaker?

How it works

AWS RoboMaker provides four core capabilities for developing, testing, and deploying intelligent robotics applications.
Cloud Extensions for ROS

Robot Operating System, or ROS, is the most widely used open source robotics software framework, providing software libraries that help you build robotics applications. AWS RoboMaker provides cloud extensions for ROS so that you can offload to the cloud the more resource-intensive computing processes that are typically required for intelligent robotics applications and free up local compute resources. These extensions make it easy to integrate with AWS services like Amazon Kinesis Video Streams for video streaming, Amazon Rekognition for image and video analysis, Amazon Lex for speech recognition, Amazon Polly for speech generation, and Amazon CloudWatch for logging and monitoring. RoboMaker provides each of these cloud service extensions as open source ROS packages, so you can build functions on your robot by taking advantage of cloud APIs, all in a familiar software framework.
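The offload pattern behind these extensions can be sketched as follows. This is a hedged illustration, assuming a stand-in function for a cloud image-analysis call (something like Rekognition's label detection); the actual RoboMaker extensions wrap such calls as ROS packages, which is not reproduced here.

```python
# Sketch of the cloud-offload pattern: heavy inference runs in the cloud,
# the robot only captures data and acts on the result.
# `cloud_detect_labels` is a placeholder for a network call to an
# image-analysis service, NOT a real AWS SDK function.

def cloud_detect_labels(image_bytes):
    # Placeholder: a real implementation would send the frame to a cloud
    # API and parse the returned labels.
    return ["person"] if image_bytes else []

def on_camera_frame(image_bytes):
    """Camera callback: offload analysis, then decide a local action."""
    labels = cloud_detect_labels(image_bytes)  # offloaded, not on-robot CPU
    return "greet" if "person" in labels else "idle"
```

The design point is that the robot's constrained processor only handles capture and the resulting action, while the compute-intensive analysis happens elsewhere.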
Development Environment

AWS RoboMaker provides a robotics development environment for building and editing robotics applications. The RoboMaker development environment is based on AWS Cloud9, so you can launch a dedicated workspace to edit, run, and debug robotics application code. RoboMaker's development environment includes the operating system, development software, and ROS automatically downloaded, compiled, and configured. Plus, RoboMaker cloud extensions and sample robotics applications are pre-integrated in the environment, so you can get started in minutes.

Simulation

Simulation is used to understand how robotics applications will act in complex or changing environments, so you don't have to invest in expensive hardware and the setup of physical testing environments. Instead, you can use simulation for testing and fine-tuning robotics applications before deploying to physical hardware. AWS RoboMaker provides a fully managed robotics simulation service that supports large-scale and parallel simulations, and automatically scales the underlying infrastructure based on the complexity of the simulation. RoboMaker also provides pre-built virtual 3D worlds such as indoor rooms, retail stores, and race tracks so you can download, modify, and use these worlds in your simulations, making it quick and easy to get started.
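Running many simulation variants in parallel is the core idea; a rough local sketch of it looks like this. Each "simulation" here is a stub that scores one parameter value; in the real service, each job would be a full RoboMaker simulation run.

```python
# Sketch of batch simulation for parameter tuning. The simulation itself is
# a stub (higher speed -> shorter lap time); a real batch would launch
# managed simulation jobs rather than local threads.
from concurrent.futures import ThreadPoolExecutor

def run_simulation(speed):
    """Stub simulation: score one candidate speed setting."""
    return {"speed": speed, "lap_time": 100.0 / speed}

def run_batch(speeds, workers=4):
    """Run all candidate simulations in parallel, preserving input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_simulation, speeds))

results = run_batch([1.0, 2.0, 4.0])
best = min(results, key=lambda r: r["lap_time"])
```

Parallel runs like this are why managed scaling matters: testing dozens of parameter combinations serially on one machine would take proportionally longer.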
Fleet Management

Once an application has been developed or modified, you’d build an over-the-air (OTA) system to securely deploy the application into the robot and later update the application while the robot is in use. AWS RoboMaker provides a fleet management service that has robot registry, security, and fault-tolerance built-in so that you can deploy, perform OTA updates, and manage your robotics applications throughout the lifecycle of your robots. You can use RoboMaker fleet management to group your robots and update them accordingly with bug fixes or new features, all with a few clicks in the console.
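The fleet-deployment behavior described above, grouped robots, OTA rollout, and built-in fault tolerance, can be sketched with a toy registry and a one-retry policy. The deployment call and flaky-robot set are stand-ins, not the RoboMaker API.

```python
# Sketch of grouped OTA deployment with simple fault tolerance:
# deploy a version to each robot in the group, retry once on failure,
# and report per-robot status. All names here are illustrative.

def deploy_to_robot(robot, version, flaky):
    """Stub deployment: robots listed in `flaky` fail on their first attempt."""
    if robot in flaky:
        flaky.discard(robot)  # next attempt will succeed
        raise ConnectionError(f"{robot} unreachable")
    return f"{robot}:{version}"

def deploy_fleet(robots, version, flaky=()):
    """Roll a version out to a robot group, retrying each robot once."""
    flaky = set(flaky)
    status = {}
    for robot in robots:
        for _attempt in range(2):  # one retry per robot
            try:
                status[robot] = deploy_to_robot(robot, version, flaky)
                break
            except ConnectionError:
                status[robot] = "failed"
    return status
```

A production OTA system adds much more (signing, staged rollout, rollback), but the group-then-retry loop is the shape of the problem fleet management solves for you.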


Get started quickly

AWS RoboMaker includes sample robotics applications to help you get started quickly. These provide the starting point for the voice command, recognition, monitoring, and fleet management capabilities that are typically required for intelligent robotics applications. Sample applications come with robotics application code (instructions for the functionality of your robot) and simulation application code (defining the environment in which your simulations will run). The sample simulation applications come with pre-built worlds such as indoor rooms, retail stores, and racing tracks so you can get started in minutes. You can modify and build on the code of the robotics application or simulation application in the development environment or use your own custom applications.

Build intelligent robots

Because AWS RoboMaker is pre-integrated with popular AWS analytics, machine learning, and monitoring services, it’s easy to add functions like video streaming, face and object recognition, voice command and response, or metrics and logs collection to your robotics application. RoboMaker provides extensions for cloud services like Amazon Kinesis Video Streams (video streaming), Amazon Rekognition (image and video analysis), Amazon Lex (speech recognition), Amazon Polly (speech generation), and Amazon CloudWatch (logging and monitoring) to developers who are using Robot Operating System, or ROS. These services are exposed as ROS packages so that you can easily use them to build intelligent functions into your robotics applications without having to learn a new framework or programming language.

Lifecycle management

Manage the lifecycle of a robotics application from building and deploying the application, to monitoring and updating an entire fleet of robots. Using AWS RoboMaker fleet management, you can deploy an application to a fleet of robots. Using the CloudWatch metrics and logs extension for ROS, you can monitor these robots throughout their lifecycle to track metrics such as CPU usage, speed, memory, and battery level. When a robot needs an update, you can use RoboMaker simulation for regression testing before deploying the fix or new feature through RoboMaker fleet management.
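The monitor-then-update decision can be sketched as a simple triage over per-robot metrics. The metric names and thresholds below are illustrative assumptions; in practice these values would flow through the CloudWatch extension rather than a local dictionary.

```python
# Sketch of fleet monitoring: collect per-robot metrics (CPU, battery, ...)
# and flag robots that need attention. Thresholds are illustrative, not
# values from any real monitoring policy.

def needs_attention(metrics, max_cpu=90.0, min_battery=15.0):
    """Flag a robot whose CPU is pegged or whose battery is nearly empty."""
    return metrics["cpu_pct"] > max_cpu or metrics["battery_pct"] < min_battery

def triage_fleet(fleet_metrics):
    """Return the names of robots that should be inspected or updated."""
    return [name for name, m in fleet_metrics.items() if needs_attention(m)]

fleet = {
    "bot-1": {"cpu_pct": 95.0, "battery_pct": 60.0},
    "bot-2": {"cpu_pct": 40.0, "battery_pct": 80.0},
}
```

A flagged robot would then go through the loop the paragraph describes: reproduce the issue in simulation, test the fix, and push it out via fleet management.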

Thursday, October 11, 2018

Fwd: OpenMV News

---------- Forwarded message ----------
From: "OpenMV" <>
Date: Oct 10, 2018 11:53 PM
Subject: OpenMV News
To: "John" <>


Better CMSIS-NN Support


Hi folks - time for a short update,

First, thanks to everyone who's backed our OpenMV Cam H7 Kickstarter! We've raised $70K for the OpenMV Cam H7 now! Awesome! Anyway, if you haven't backed us yet, please do! We've still got a few days left on the Kickstarter.

Next, I spent some time updating the CMSIS-NN examples on the OpenMV Cam GitHub. We now have a README that walks you through how to use the library, with the exact command-line values to run.

With this new guide and a deep-learning machine, you can now actually train networks. All of the networks run on the OpenMV Cam H7; on the OpenMV Cam M7, only the smile and cifar10_fast networks are small enough to run, since networks need to be no more than 30 KB or so. If you want to create your own custom CNN, you can now do so by following our README walkthrough on how we made our smile-detection CNN. Once you've got a deep-learning rig with Caffe installed, training a new network is very easy.
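The size constraint above is worth making concrete. Here is a tiny sketch of a per-board budget check; the 30 KB M7 figure comes from the text, while the H7 budget below is purely a placeholder assumption, not a firmware limit.

```python
# Sketch of checking a trained CNN against a per-board size budget.
# M7 budget (~30 KB) is from the newsletter text; the H7 value is an
# ASSUMED placeholder standing in for "more headroom than the M7".

BUDGETS_KB = {"M7": 30, "H7": 128}  # H7 figure is an assumption

def fits_on_board(model_size_kb, board):
    """True if a model of the given size fits the board's assumed budget."""
    return model_size_kb <= BUDGETS_KB[board]
```

A check like this, run right after training, saves a round trip of flashing a network onto the camera only to find it doesn't fit.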

Finally, there was a bug in ARM's CMSIS-NN code that was previously causing issues when running your own CNN; it has now been fixed on the master branch of the OpenMV Cam GitHub.

Anyway, we're going to try to get an IDE release out with all these fixes along with new CNN examples now that we've documented how to do things.

Thanks for reading,

Copyright © 2018 OpenMV, LLC, All rights reserved.