# July 01, 2020

## Bounded Variable Sigmoid Mapping Function for Curving Random Numbers

### Math, Math Demonstration

So the other day I wrote about a curving function for random numbers that can weight them toward high or low values. What if we want numbers concentrated around a center? We can use a Box-Muller transform to get numbers with a Gaussian distribution. The problem with that is the numbers are unbounded. That is, there is no limit to how far they can deviate from the center. We want fixed upper and lower limits. I've tried doing this in the past with a ranged Box-Muller transform, but it isn't quite as configurable as I would like. Then I ran across this post on Stack Exchange. It is a variable sigmoid function bounded between 0 and 1. Here is an implementation:

[Interactive demo: coefficient control and standard deviation readout]
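The demo itself is interactive, but a minimal sketch of the transform in JavaScript (the function name `sigmoidCurve` is my own) could look like this:

```javascript
// Bounded variable sigmoid: maps x in [0, 1] to y in [0, 1].
// A coefficient c in (0, 1) concentrates results around 0.5;
// c > 1 pushes them toward the edges.  The endpoints are handled
// explicitly because the formula is undefined at x = 0 and x = 1.
function sigmoidCurve(x, c) {
  if (x <= 0) return 0;
  if (x >= 1) return 1;
  return 1 / (1 + Math.pow(1 / x - 1, c));
}

// Curve a uniform random number toward the center:
const centered = sigmoidCurve(Math.random(), 0.3);
```

Note that x = 0.5 always maps to 0.5 regardless of the coefficient, which is what keeps the distribution centered.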

This is a fantastic function. For coefficient values greater than zero and less than 1, the function exhibits a Gaussian distribution. This produces numbers around a center point (0.5) that can deviate up to a maximum on either side (0 or 1), with control over how likely deviation is (something akin to standard deviation). However, it is more versatile than that. For coefficients greater than 1, the function produces a reverse Gaussian distribution, pushing the results toward the sides and making the middle less likely. There are applications for both. First, though, let's look at the transform function. The Stack Exchange post gives it as:

$y=1-\frac{1}{1+{\left(\frac{1}{x}-1\right)}^{-c}}$

But I like this form better as it is easier to read:

$y=\frac{1}{1+{\left({x}^{-1}-1\right)}^{c}}$

It is a bounded function but is actually undefined at the end-points. I could see why x=0 didn't work (divide-by-zero), but it wasn't initially obvious why x=1 didn't work. This identity makes it more obvious:

${x}^{c}={e}^{c\mathrm{ln}\left(x\right)}$

So if x=1, the value inside the parentheses becomes 0, and the natural log of 0 is undefined.

Looking at the function it is easy to deduce that at the endpoints y=0 when x=0 and y=1 when x=1, but we can prove it mathematically:

$\begin{array}{c}\underset{x\to {0}^{\text{+}}}{\mathrm{lim}}\frac{1}{1+{\left({x}^{-1}-1\right)}^{c}}=0\\ \underset{x\to {1}^{\text{-}}}{\mathrm{lim}}\frac{1}{1+{\left({x}^{-1}-1\right)}^{c}}=1\end{array}$

So a complete definition would be:

$\begin{array}{c}y=\left\{\begin{array}{cc}0& x=0\\ \frac{1}{1+{\left({x}^{-1}-1\right)}^{c}}& 0<x<1\\ 1& x=1\end{array}\right.\\ \left\{x\in \mathbb{R}\mathrm{:}0\le x\le 1\right\}\\ \left\{c\in \mathbb{R}\mathrm{:}c>0\right\}\end{array}$

Now there must be a relation between the coefficient in this equation and the standard deviation. Sadly, I was unable to find it before publishing this article.

# June 29, 2020

## Function for Curving Random Numbers

### Math Demonstration, Programming

Your typical Pseudo-Random Number Generator (PRNG) will generate uniform random numbers.  That is, equal odds of any number.  This is often normalized so the random numbers are between 0 and 1.  There are times we do not wish equal odds, but rather some known odds.  One way to do this is through a mapping function that will curve the numbers.  Here is a simple curving function demo:

[Interactive demo: curve position, coefficient, and standard deviation controls]

The circle can be moved around to modify the curve, and it also depicts the odds at that point.  Try dragging it around.  The curve will always pass through the center of the circle by generating a coefficient.  The plot below the main graph is a histogram, showing the distribution of values.  Unless the distribution is uniform (a coefficient of 1), more values will be on one side than the other.

For example, if the point is at (0.4, 0.3), that means 40% of all the random numbers will be below 0.3.  Conversely, 60% are above 0.3.  Since this is a one-to-one map transform we can simply plug in a normalized random number to generate a weighted random result.  Since the values are normalized we can scale it up (by multiplying the result by an amplitude constant) and/or offset it (adding a constant).

The function for the mapping is very simple.

$\begin{array}{c}y={x}^{c}\\ \left\{x\in \mathbb{R}|0\le x\le 1\right\}\\ \left\{c\in \mathbb{R}|0<c\right\}\end{array}$

This function will map any x value between 0.0 and 1.0 which is the range of normalized PRNG.  The coefficient is any positive real number above zero.  This allows the output to be weighted more toward smaller or larger numbers.  To use the method of applying a fixed-point (as demonstrated by the circle on the plot), the following equation can be used:

$c=\frac{\mathrm{ln}y}{\mathrm{ln}x}$

Where x and y are the coordinates of the point on the curve.  This guarantees a coefficient whose curve passes through that point.
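Putting the two equations together, a short JavaScript sketch (the names `coefficientFor` and `curve` are mine):

```javascript
// Coefficient that makes y = x^c pass through the point (px, py),
// with 0 < px < 1 and 0 < py < 1.
function coefficientFor(px, py) {
  return Math.log(py) / Math.log(px);
}

// The mapping itself: x in [0, 1] -> y in [0, 1].
function curve(x, c) {
  return Math.pow(x, c);
}

// Example: force the curve through (0.4, 0.3), then scale the
// weighted result into an arbitrary range [5, 15].
const c = coefficientFor(0.4, 0.3);
const weighted = 10 * curve(Math.random(), c) + 5;
```

Because the inputs are uniform and the map is one-to-one, 40% of the `curve` outputs will fall below 0.3, matching the demo.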

## From Noah

#### July 03, 2020 at 1:50 PM

Very interesting! The 1:1 mapping here makes a lot of sense, and I like how easy it is to just normalize across a new range.

I wonder if more complex probabilities could be created the same way - for example a normal distribution? Binomial? Rayleigh? As long as it's continuous there could be a lot of interesting ways to mess with PRNG output, should you need it.

# November 05, 2017

## Impulse Noise Filtering - Median Filter Demo

### Math Demonstration, Impulse Noise Filtering

In this demonstration there are three sliders. From left to right they are: signal noise, spike density, and filter window length. The signal noise is just basic white noise added to the signal. The spike density is the percentage of impulse noise added to the signal. And the filter window is how many samples are used by the median filter.

Shown in red is the raw signal, complete with spikes. In blue is the median filtered signal. The goal is to have the blue signal form a smooth trace with no spikes, and the default settings should produce this. Increasing the spike density will require a larger filter window.

Just as with an average filter, the median filter will induce phase shift as well as attenuate the signal. The larger the filter window the larger the phase shift and the more attenuation. Unique to the median filter is the creation of plateaus around signal peaks. This is because of the phase shift and the changing direction of the signal lags during peak values.
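A minimal sketch of the median filter described above, in JavaScript (edge samples here use a truncated window, which is one of several common boundary choices):

```javascript
// Sliding-window median filter.  window should be odd; near the
// edges the window is truncated to the available samples.
function medianFilter(signal, window) {
  const half = Math.floor(window / 2);
  return signal.map((_, i) => {
    const lo = Math.max(0, i - half);
    const hi = Math.min(signal.length, i + half + 1);
    const win = signal.slice(lo, hi).sort((a, b) => a - b);
    return win[Math.floor(win.length / 2)];
  });
}
```

A single-sample spike is rejected completely by even a 3-sample window, which is what makes the median filter so effective against impulse noise where an averaging filter would only smear the spike out.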

# July 05, 2017

## N-Queens Demonstration

### Math Demonstration, Programming, 8 queens puzzle

In implementing the 8 queens puzzle I decided to create a Javascript interface that allows one to try placing queens on a chessboard of arbitrary size.

By clicking on a cell a queen is placed or removed. Highlighted cells show the threats created by placing a queen at that location. Yellow cells show that the highlighted area poses a threat. Red cells show the board is in conflict. A fully green board shows a solution.

One can generally find solutions to the smaller boards pretty quickly. The larger boards get very complicated but also have more possible solutions. The solve button uses the method of permutations to try to find a random solution. For the larger boards it may not find a solution and will time out after 30 seconds. However, trying again will eventually produce a result, sometimes quite quickly. It really depends on how the board was shuffled.  The total number of solutions to test for any board is n!, where n is the number of rows/columns.
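The method of permutations could be sketched like this in JavaScript (function names are my own). A permutation already places exactly one queen per row and per column, so only diagonal conflicts need checking:

```javascript
// Fisher-Yates shuffle, in place.
function shuffle(a) {
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// perm[row] is the column of the queen in that row.  Two queens
// share a diagonal when their row and column distances are equal.
function hasDiagonalConflict(perm) {
  for (let i = 0; i < perm.length; i++)
    for (let j = i + 1; j < perm.length; j++)
      if (Math.abs(perm[i] - perm[j]) === j - i) return true;
  return false;
}

// Shuffle until a conflict-free permutation appears (or give up).
function solveNQueens(n, maxTries = 1e6) {
  const perm = Array.from({ length: n }, (_, i) => i);
  for (let t = 0; t < maxTries; t++) {
    shuffle(perm);
    if (!hasDiagonalConflict(perm)) return perm.slice();
  }
  return null; // timed out
}
```

The demo's 30-second timeout plays the role of `maxTries` here: the run time depends entirely on how lucky the shuffles are.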

| Rows/Columns | Total Permutations |
|---|---|
| 4 | 24 |
| 5 | 120 |
| 6 | 720 |
| 7 | 5,040 |
| 8 | 40,320 |
| 9 | 362,880 |
| 10 | 3,628,800 |
| 11 | 39,916,800 |
| 12 | 479,001,600 |
| 13 | 6,227,020,800 |
| 14 | 87,178,291,200 |
| 15 | 1,307,674,368,000 |
| 16 | 20,922,789,888,000 |
| 17 | 355,687,428,096,000 |
| 18 | 6,402,373,705,728,000 |
| 19 | 121,645,100,408,832,000 |
| 20 | 2,432,902,008,176,640,000 |

That last number is 2.4 quintillion. For comparison a computer counting at 4 billion counts/second (4 GHz) would take 19 years just to count that high.
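That figure is easy to verify, assuming 365-day years:

```javascript
// 20! counted at 4 billion counts per second (4 GHz)
const permutations = 2432902008176640000; // 20!
const seconds = permutations / 4e9;       // ~6.08e8 seconds
const years = seconds / (3600 * 24 * 365); // ~19.3 years
```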