Monday, September 24, 2018

Goertzel filter




https://en.wikipedia.org/wiki/Goertzel_algorithm

Many applications require the detection of a few discrete sinusoids. The Goertzel filter is an IIR filter that uses feedback to create a very high-Q bandpass filter whose coefficients are generated directly from the required centre frequency. The most common configuration for this technique is to measure the signal energy before and after the filter and compare the two: if the energies are similar, the input signal is centred in the passband; if the output energy is significantly lower than the input energy, the signal is outside the passband. The Goertzel algorithm is most commonly implemented as a second-order recursive IIR filter, as shown below.



https://github.com/jacobrosenthal/Goertzel

MATLAB Code
function Xk = goertzel_non_integer_k(x, k)
%   Computes an N-point DFT coefficient for a
%   real-valued input sequence 'x' where the center
%   frequency of the DFT bin is 2*pi*k/N radians/sample.
%   N is the length of the input sequence 'x'.
%   Positive-valued frequency index 'k' need not be
%   an integer but must be in the range 0 to N-1.

%   [Richard Lyons, Oct. 2013]

N = length(x);
Alpha = 2*pi*k/N;
Beta = 2*pi*k*(N-1)/N;

% Precompute network coefficients
Two_cos_Alpha = 2*cos(Alpha);
a = cos(Beta);
b = -sin(Beta);
c = sin(Alpha)*sin(Beta) - cos(Alpha)*cos(Beta);
d = sin(2*pi*k);

% Init. delay line contents
w1 = 0;
w2 = 0;

for n = 1:N % Start the N-sample feedback looping
    w0 = x(n) + Two_cos_Alpha*w1 - w2;
    % Delay line data shifting
    w2 = w1;
    w1 = w0;
end

Xk = w1*a + w2*c + j*(w1*b + w2*d);


Quiet Beacon

Low-powered beacon transmitter/receiver which can be used either on its own or in addition to libquiet. The transmitter creates a simple sine tone at a specified frequency, and the receiver uses the Goertzel algorithm to detect the presence of the tone with complexity O(n) for n samples.
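As a rough sketch of that idea (my own minimal Python, not Quiet Beacon's actual code; the function name and parameters are illustrative), here is single-bin tone detection with the second-order Goertzel recurrence:

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Estimate the power of target_freq in samples using the
    second-order Goertzel recurrence: O(n) for n samples."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin index
    omega = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(omega)              # the 2*cos(Alpha) feedback coefficient
    w1 = w2 = 0.0
    for x in samples:
        w0 = x + coeff * w1 - w2               # feedback stage
        w2, w1 = w1, w0                        # delay-line shift
    # squared magnitude of the k-th DFT bin
    return w1 * w1 + w2 * w2 - coeff * w1 * w2

# Detect a 440 Hz tone in 8 kHz audio: probe in-band vs. out-of-band power
fs, f0, n = 8000, 440.0, 800
tone = [math.sin(2 * math.pi * f0 * i / fs) for i in range(n)]
p_on  = goertzel_power(tone, fs, f0)      # probe at the tone frequency
p_off = goertzel_power(tone, fs, 1200.0)  # probe well outside the band
```

Comparing `p_on` against `p_off` (or against total input energy, as described above) gives the presence/absence decision.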




Saturday, September 22, 2018

NaviPack LiDAR Navigation Module


https://www.indiegogo.com/projects/navipack-lidar-navigation-module-reinvented#/

https://www.youtube.com/watch?v=SBhIdXVnoZU&feature=share

https://robot.imscv.com/en/product/3D%20LIDAR


NaviPack makes any device smarter and easier to control. It uses the latest LiDAR technology and powerful APIs to create easy solutions for automated devices.

With its built-in SLAM algorithm chip, NaviPack is the first plug-and-play LiDAR navigation module. NaviPack is also the most affordable LiDAR solution for drones, robots and other devices, instantly giving them powerful 360-degree sensing capabilities.

NaviPack integrates the SLAM algorithm with the LiDAR sensor module, making it super easy to use and significantly reducing development time.

NaviPack performs 360-degree scanning of its surroundings, detecting objects up to 15 meters away at 4,000 points per second. It is super easy to use! With the built-in SLAM module, it starts working immediately after being plugged into your device: scanning the environment, building a detailed map, and enabling autonomous movement.










Wednesday, September 12, 2018

Keypoints in computer vision - OpenCV3 techniques

OpenCV3 - Keypoints in Computer Vision by Dr. Adrian Kaehler, Ph.D.




https://www.youtube.com/watch?v=tjuaZGvlBh4






Another good talk from him,


Future Talk #91 - Machine Vision, Deep Learning and Robotics

https://www.youtube.com/watch?v=kPq4lYGr7rE
A discussion of machine vision, deep learning and robotics with Adrian Kaehler, founder and CEO of Giant.AI and founder of the Silicon Valley Deep Learning Group

Feynman's technique





“I had learned to do integrals by various methods shown in a book that my high school physics teacher Mr. Bader had given me. [It] showed how to differentiate parameters under the integral sign — it’s a certain operation. It turns out that’s not taught very much in the universities; they don’t emphasize it. But I caught on how to use that method, and I used that one damn tool again and again. [If] guys at MIT or Princeton had trouble doing a certain integral, [then] I come along and try differentiating under the integral sign, and often it worked. So I got a great reputation for doing integrals, only because my box of tools was different from everybody else’s, and they had tried all their tools on it before giving the problem to me.” (Surely You’re Joking, Mr. Feynman!)


 Alright, let's dive into this beautiful, underappreciated beast: the Leibniz Integral Rule, the unsung hero of integration, the "Feynman's technique" that ain't got shit to do with quantum squiggles. We're talking about differentiating under the integral sign, that slick move that can turn a seemingly impossible integral into something you can practically solve in your goddamn sleep.

So, what the hell is this Leibniz Integral Rule? In its guts, it's about figuring out how the result of a definite integral changes when the limits of integration or the integrand itself depend on some other parameter, let's call it α.

Imagine you've got an integral that looks like this:

$I(\alpha) = \int_{a(\alpha)}^{b(\alpha)} f(x,\alpha)\,dx$
Here's the goddamn magic: the Leibniz Integral Rule tells us how to find the derivative of this whole shebang with respect to α, i.e., $dI/d\alpha$. And it's a beautiful piece of calculus that lets you swap the order of differentiation and integration under certain conditions.

The rule states that:

$\frac{dI}{d\alpha} = \int_{a(\alpha)}^{b(\alpha)} \frac{\partial f(x,\alpha)}{\partial \alpha}\,dx + f(b(\alpha),\alpha)\,\frac{db}{d\alpha} - f(a(\alpha),\alpha)\,\frac{da}{d\alpha}$

Let's break down this glorious mess piece by goddamn piece:

  1. $\frac{\partial f(x,\alpha)}{\partial \alpha}$: This is the partial derivative of the integrand f(x,α) with respect to the parameter α, treating x as a constant. This term accounts for how the integrand itself changes as α varies.

  2. $f(b(\alpha),\alpha)\,\frac{db}{d\alpha}$: This part deals with the upper limit of integration, b(α), which can also be a function of α. We evaluate the integrand at the upper limit and multiply it by the derivative of the upper limit with respect to α. This tells us how the integral changes because the upper boundary is moving.

  3. $-f(a(\alpha),\alpha)\,\frac{da}{d\alpha}$: Similarly, this handles the lower limit of integration, a(α), which can also depend on α. We evaluate the integrand at the lower limit and multiply it by the derivative of the lower limit with respect to α. This tells us how the integral changes because the lower boundary is moving. Note the minus sign here – it's crucial because the lower limit is the "start" of the integration.

When the Limits are Constant:

Now, here's where it gets particularly slick for solving those stubborn integrals. If the limits of integration a and b are constants (i.e., they don't depend on α), then $\frac{da}{d\alpha} = 0$ and $\frac{db}{d\alpha} = 0$. In this case, the Leibniz Integral Rule simplifies to the much cleaner form:

$\frac{dI}{d\alpha} = \int_a^b \frac{\partial f(x,\alpha)}{\partial \alpha}\,dx$
This is the damn magic that Feynman loved. You introduce a parameter α into your integrand in a clever way, differentiate under the integral sign with respect to α, solve the resulting (hopefully easier) integral, and then, in the end, you evaluate your result at a specific value of α that corresponds to your original integral.
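As a concrete illustration (a standard textbook example, not one from the quote above), take:

```latex
I(\alpha) = \int_0^1 \frac{x^\alpha - 1}{\ln x}\,dx
\quad\Rightarrow\quad
\frac{dI}{d\alpha} = \int_0^1 \frac{\partial}{\partial \alpha}\!\left(\frac{x^\alpha - 1}{\ln x}\right)dx
                   = \int_0^1 x^\alpha\,dx = \frac{1}{\alpha + 1}
```

Integrating back gives $I(\alpha) = \ln(\alpha + 1) + C$, and $I(0) = 0$ forces $C = 0$. Setting $\alpha = 1$ evaluates the otherwise nasty integral: $\int_0^1 \frac{x - 1}{\ln x}\,dx = \ln 2$.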

The Power of the Technique:

The power of this technique lies in its ability to transform a difficult integral into a potentially simpler one. Often, differentiating with respect to α can bring down powers of x or introduce terms that make the integration with respect to x much more manageable. After you've integrated with respect to x, you're left with a function of α, which you then integrate with respect to α to get back to your original integral (plus a constant of integration that you may need to determine using a known value of the original integral at a specific α).

Conditions for Validity (Don't Be a Goddamn Fool):

Like any powerful tool, the Leibniz Integral Rule comes with some conditions you can't just ignore:

  1. Continuity of f(x,α): The function f(x,α) must be continuous with respect to both x and α on the region of integration.

  2. Continuity of $\frac{\partial f}{\partial \alpha}$: The partial derivative of f with respect to α must also be continuous on the region of integration.

  3. Differentiability of Limits: If the limits of integration a(α) and b(α) depend on α, they must be differentiable with respect to α.

If these conditions are met, then you're goddamn golden to swap the order of differentiation and integration.

Why the Obscurity?

As Feynman pointed out, it's a damn shame this technique isn't emphasized more. Maybe it's because it requires a bit of ingenuity in introducing the parameter α. It's not a plug-and-chug method like some basic integration rules. It demands a bit of creative thinking, a sense of how a parameter might simplify the integrand. But when it works, it works like a goddamn charm, turning integrals that would make seasoned mathematicians sweat into elegant solutions.

So, the Leibniz Integral Rule, or "Feynman's technique" (the non-quantum kind, you hear me?), is a powerful and often overlooked tool in the integrator's arsenal. It's about understanding how integrals behave when their guts change, and it can be the goddamn key to unlocking some of the toughest mathematical puzzles out there. Don't you forget it.




Complex numbers, Quaternions and Octonions




https://en.wikipedia.org/wiki/Real_number   2^0 = 1 dimension

https://en.wikipedia.org/wiki/Complex_number  2^1 = 2 dimensions

https://en.wikipedia.org/wiki/Quaternion  2^2 = 4 dimensions

https://en.wikipedia.org/wiki/Octonion  2^3 = 8 dimensions

https://en.wikipedia.org/wiki/Sedenion  2^4 = 16 dimensions

Trigintaduonions  2^5 = 32 dimensions





In the construction of types of numbers, we have the following sequence:

R ⊂ C ⊂ H ⊂ O ⊂ S

or:

"Reals" ⊂ "Complex" ⊂ "Quaternions" ⊂ "Octonions" ⊂ "Sedenions"
With the following "properties":
  • From R to C you gain "algebraic-closure"-ness (but you throw away ordering).
  • From C to H we throw away commutativity.
  • From H to O we throw away associativity.
  • From O to S we throw away multiplicative normedness.

Why am I talking about this? Well, specifically, quaternions are of interest for robotics.

There are many different parameterizations for orientations:
  • Euler Angles 
  • Angle Axis
  • Rotation matrix 
  • Quaternions

Euler angles and angle-axis have singularities!

Rotation Matrix 
  • 9 scalars, more complex regularization 
  • Concatenation: 27 multiplications
  • Rotating a vector: 9 multiplications


Quaternion
  • 4 scalars, easy regularization
  • Concatenation: 16 multiplications
  • Rotating a vector: 18 multiplications 

Compared to Euler angles they are simpler to compose and avoid the problem of gimbal lock. Compared to rotation matrices they are more compact, more numerically stable, and more efficient. ... When used to represent rotation, unit quaternions are also called rotation quaternions.

Quaternions and spatial rotation - Wikipedia

https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation

William Hamilton invented quaternions in 1843 as a method to allow him to multiply and divide vectors, rotating and stretching them.

Alternative to Euler and Dot Products. http://en.wikipedia.org/wiki/Dot_product
Quaternions are an extension of complex numbers. A quaternion has three imaginary units $i$, $j$ and $k$ and can be written in the form:
   $\tilde{Q} = q_w + q_x i + q_y j + q_z k$
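A minimal sketch of how this is used for rotation (my own Python illustration; the function names are mine): a vector v is rotated by a unit quaternion q via the sandwich product q v q*, where v is embedded as a pure quaternion (0, v).

```python
import math

def qmul(p, q):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotate(v, axis, angle):
    """Rotate 3-vector v by angle (radians) about a unit axis using q v q*."""
    h = angle / 2.0
    s = math.sin(h)
    q = (math.cos(h), axis[0]*s, axis[1]*s, axis[2]*s)  # unit rotation quaternion
    qc = (q[0], -q[1], -q[2], -q[3])                    # conjugate = inverse for unit q
    w = qmul(qmul(q, (0.0,) + tuple(v)), qc)            # sandwich product
    return w[1:]

# 90 degrees about the z-axis maps the x-axis onto the y-axis
vx = rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```

Note that `qmul` is the "concatenation" step counted above (16 scalar multiplications), and `rotate` performs two of them.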


OpenSCAD

Singularities
One must be aware of singularities in the Euler angle parametrization when the pitch approaches ±90° (north/south pole). These cases must be handled specially. The common name for this situation is gimbal lock.
Code to handle the singularities is derived on this site: www.euclideanspace.com
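To see the singularity concretely, here is a small Python sketch (my own illustration, not code from euclideanspace.com): at 90° pitch in a Z-Y-X (yaw-pitch-roll) convention, the yaw and roll rotations act about the same physical axis, so only their difference matters and distinct Euler triples collapse onto the same rotation matrix.

```python
import math

def rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zyx(yaw, pitch, roll):
    """Rotation matrix from Z-Y-X (yaw-pitch-roll) Euler angles."""
    return matmul(rz(yaw), matmul(ry(pitch), rx(roll)))

# At pitch = 90 deg only (roll - yaw) matters: both triples below have
# roll - yaw = -0.2, so they produce the identical rotation matrix.
r1 = euler_zyx(0.3, math.pi / 2, 0.1)
r2 = euler_zyx(0.5, math.pi / 2, 0.3)
```

This is exactly the lost degree of freedom that quaternions avoid.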

A sensor fusion algorithm for an integrated angular position estimation with inertial measurement units

A Gyro-Free Quaternion-based Attitude Determination system suitable for implementation using low cost sensors.

Orientation estimation using a quaternion-based indirect Kalman filter with adaptive estimation of external acceleration

Videos

https://youtu.be/d4EgbgTm0Bg What are quaternions, and how do you visualize them? A story of four dimensions.


https://www.youtube.com/watch?v=dttFiVn0rvc Math for Game Developers - Axis-Angle Rotation
https://www.youtube.com/watch?v=SCbpxiCN0U0 Math for Game Developers - Rotation Quaternions
https://www.youtube.com/watch?v=A6A0rpV9ElA Math for Game Developers - Quaternion Inverse
https://www.youtube.com/watch?v=CRiR2eY5R_s Math for Game Developers - Multiplying Quaternions
https://www.youtube.com/watch?v=Ne3RNhEVSIE Math for Game Developers - Quaternions and Vectors
https://www.youtube.com/watch?v=x1aCcyD0hqE Math for Game Developers - Slerping Quaternions (Spherical Linear Interpolation)
https://www.youtube.com/watch?v=fRSaaLtYj68 Math for Game Developers - Quaternion Wrapup and Review


https://www.youtube.com/watch?v=dul0mui292Q Math for Game Developers - Perspective Matrix Part 2
https://www.youtube.com/watch?v=jeO_ytN_0kk Math for Game Developers - Perspective Matrix

https://www.youtube.com/watch?v=8gST0He4sdE Hand Calculation of Quaternion Rotation
https://www.youtube.com/watch?v=KdW9ALJMk7s Quaternions Explained by Dan

https://www.youtube.com/watch?v=0_XoZc-A1HU FamousMathProbs13b: The rotation problem and Hamilton's discovery of quaternions (II)
