Thursday, October 11, 2018

Fwd: OpenMV News

---------- Forwarded message ----------
From: "OpenMV" <>
Date: Oct 10, 2018 11:53 PM
Subject: OpenMV News
To: "John" <>


Better CMSIS-NN Support


Hi folks - time for a short update,

First, thanks to everyone who's backed our OpenMV Cam H7 Kickstarter! We've now raised $70K for the OpenMV Cam H7! Awesome! If you haven't backed us yet, please do - we've still got a few days left on the Kickstarter.

Next, I spent some time updating the CMSIS-NN examples on the OpenMV Cam GitHub. We now have a README that walks you through how to use the library, with the exact command-line values to run.

With this new guide and a deep-learning machine you can now actually train networks. All of the networks run on the OpenMV Cam H7; on the OpenMV Cam M7 only the smile and cifar10_fast networks are small enough to run, since networks there need to be no larger than about 30 KB. If you want to create your own custom CNN, you can now do so by following the README walkthrough of how we made our smile-detection CNN. Once you've got a deep-learning rig with Caffe installed, training a new network is very easy.

Finally, there was a bug in the CMSIS-NN code from ARM that was previously causing issues with running your own CNNs. It has now been fixed on the master branch of the OpenMV Cam GitHub.

Anyway, we're going to try to get an IDE release out with all these fixes along with new CNN examples now that we've documented how to do things.

Thanks for reading,

Copyright © 2018 OpenMV, LLC, All rights reserved.

Monday, September 24, 2018

Goertzel filter

Many applications require the detection of a few discrete sinusoids. The Goertzel filter is an IIR filter that uses feedback to create a very high-Q bandpass filter whose coefficients are generated directly from the required centre frequency. The most common configuration for using this technique is to measure the signal energy before and after the filter and to compare the two. If the energies are similar, the input signal is centred in the pass-band; if the output energy is significantly lower than the input energy, the signal is outside the pass-band. The Goertzel algorithm is most commonly implemented as a second-order recursive IIR filter, as shown below.

Matlab Code
function Xk = goertzel_non_integer_k(x, k)
%   Computes an N-point DFT coefficient for a
%   real-valued input sequence 'x' where the center
%   frequency of the DFT bin is 2*pi*k/N radians/sample.
%   N is the length of the input sequence 'x'.
%   Positive-valued frequency index 'k' need not be
%   an integer but must be in the range of 0 to N-1.
%   [Richard Lyons, Oct. 2013]
N = length(x);
Alpha = 2*pi*k/N;
Beta = 2*pi*k*(N-1)/N;
% Precompute network coefficients
Two_cos_Alpha = 2*cos(Alpha);
a = cos(Beta);
b = -sin(Beta);
c = sin(Alpha)*sin(Beta) - cos(Alpha)*cos(Beta);
d = sin(2*pi*k);
% Initialize delay line contents
w1 = 0;
w2 = 0;
for n = 1:N % Start the N-sample feedback looping
    w0 = x(n) + Two_cos_Alpha*w1 - w2;
    % Delay line data shifting
    w2 = w1;
    w1 = w0;
end
% Combine the final delay-line values into the complex DFT coefficient
Xk = w1*a + w2*c + j*(w1*b + w2*d);
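For readers without Matlab, here is a Python port of the routine above (a sketch; the function name mirrors the Matlab original). For integer k it can be checked directly against the DFT definition:

```python
import cmath
import math

def goertzel_non_integer_k(x, k):
    """Compute one N-point DFT coefficient of real sequence x at
    (possibly non-integer) frequency index k, following the
    structure of the Matlab routine above."""
    N = len(x)
    alpha = 2 * math.pi * k / N
    beta = 2 * math.pi * k * (N - 1) / N
    # Precompute network coefficients
    two_cos_alpha = 2 * math.cos(alpha)
    a = math.cos(beta)
    b = -math.sin(beta)
    c = math.sin(alpha) * math.sin(beta) - math.cos(alpha) * math.cos(beta)
    d = math.sin(2 * math.pi * k)
    # Run the N-sample feedback loop (the second-order recursion)
    w1 = w2 = 0.0
    for xn in x:
        w0 = xn + two_cos_alpha * w1 - w2
        w2, w1 = w1, w0
    # Combine the final delay-line values into the complex coefficient
    return complex(w1 * a + w2 * c, w1 * b + w2 * d)
```

For an integer bin index the result matches the DFT sum X(k) = Σ x(n)·e^(-j2πnk/N) to floating-point precision.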

Quiet Beacon

Low-powered beacon transmitter/receiver which can be used either on its own or in addition to libquiet. The transmitter creates a simple sine tone at a specified frequency, and the receiver uses the Goertzel algorithm to detect the presence of the tone with O(n) complexity for n samples.
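The receiver side can be sketched as follows (an illustration, not libquiet's actual API; the function name and threshold are mine): the Goertzel recursion accumulates one DFT bin in a single O(n) pass, and tone presence is decided by comparing that bin's energy against the total input energy, as in the before/after energy comparison described in the Goertzel filter note above.

```python
import math

def tone_present(samples, sample_rate, target_hz, threshold=0.5):
    """Detect a sine tone near target_hz using the standard
    integer-bin Goertzel recursion, then compare the energy in
    that bin against the total input energy."""
    n = len(samples)
    k = round(target_hz * n / sample_rate)  # nearest DFT bin index
    coeff = 2 * math.cos(2 * math.pi * k / n)
    # One O(n) pass of the second-order recursion
    w1 = w2 = 0.0
    for s in samples:
        w0 = s + coeff * w1 - w2
        w2, w1 = w1, w0
    # Squared magnitude of the target DFT bin
    bin_power = w1 * w1 + w2 * w2 - coeff * w1 * w2
    total_power = sum(s * s for s in samples)
    # A unit-amplitude tone centred on the bin gives |X(k)| = n/2,
    # so bin_power / (total_power * n/2) is ~1 for a pure tone.
    return bin_power >= threshold * total_power * n / 2
```

With a 1 kHz tone sampled at 8 kHz, the detector fires for a 1 kHz target and stays quiet for a 2 kHz target.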

Saturday, September 22, 2018

NaviPack LiDAR Navigation Module

NaviPack makes any device smarter and easier to control. It uses the latest LiDAR technology and powerful APIs to create easy solutions for automated devices.

With its built-in SLAM algorithm chip, NaviPack is the first plug-and-play LiDAR navigation module. NaviPack is also the most affordable LiDAR solution for drones, robots, and other devices, instantly enabling them with powerful 360-degree sensing capabilities.

NaviPack integrates the SLAM algorithm with the LiDAR sensor module, making it super easy to use and significantly reducing development time.

NaviPack performs 360-degree scanning of its surroundings, detecting objects up to 15 meters away at 4,000 points per second. With the built-in SLAM module, it starts working immediately after being plugged into your device: scanning the environment, building a detailed map, and enabling autonomous movement.


Wednesday, September 12, 2018

Keypoints in computer vision - OpenCV3 techniques

OpenCV3 - Keypoints in Computer Vision by Dr. Adrian Kaehler, Ph.D.

Another good talk from him:

Future Talk #91 - Machine Vision, Deep Learning and Robotics
A discussion of machine vision, deep learning, and robotics with Adrian Kaehler, founder and CEO of Giant.AI and founder of the Silicon Valley Deep Learning Group.