The Cha-Cha of Progress

Progress is rarely linear.

Economists have been writing about bubbles since before the great tulip boom of 1637. The dot-com boom of the '90s followed the classic three-stage pattern:

  1. The world has changed! I’ve got to get in on this! Everything we knew was wrong because now XYZ has changed the fundamental rules of the game.

    Everything becomes overbought, but at the time, this seems like a rational and sound decision. (see also: Internet stocks in 1999, Bitcoin in Dec 2013)

  2. We were wrong, and boy, were we ever. This is all smoke and mirrors. There’s very little of value in this – sell now before it’s too late.

    The market crashes below its intrinsic value. If you missed the first pop, this is when to buy. (see also: Internet stocks in 2001, Bitcoin in July 2015)

  3. Ok, this is indeed useful. The world has changed, but not as much as we thought. Optimism returns, albeit tempered. (see also: Internet now, Bitcoin now)

I believe machine intelligence is entering a bubble. Don't get me wrong – machine intelligence / AI will indeed revolutionize our world.

From what I see, though, we're at the stage where narrow, focused AI is truly useful at specific tasks. We're still a long way from general intelligence and deep discussions about meaning.

One of my fundamental philosophies is that things are rarely as good or as bad as most people think. That said, one can definitely profit from the herd.

During times of heady optimism, sell shovels to gold rush miners. When there's blood in the streets, crown jewels can be had at good prices.

Furthermore, we need the heady optimism. It drives the investment that really does create long-term value and growth.

One look at Nvidia ($NVDA) stock of late and you can see the enthusiasm for AI. Will it overshoot and drop? Almost certainly, but nobody knows how far or how fast that run-up will go. The company has a wealth of patents that may truly create substantial shareholder value for decades to come.

Political bubbles now form in the same fashion. We are all surrounded by social media of our own choosing – via direct subscription, or indirectly via click tracking – which polarizes human attention into camps.

The dark ages are a reminder that human progress is never linear, and yet, the world, slowly, is indeed getting better – just not in a smooth predictable line.


Source + more reading:

AI for Business – the Disruption Kit

Until recently, the ability to distill meaning, patterns, and relative importance has been a uniquely human skill.

At SwiftCloud, the next generation of our code is built to handle datasets of 9 billion human records, i.e. the population of the planet. While this may sound ambitious, it's the inevitable trajectory of marketing software. Tech-heads like me see not just massive data as inevitable, but the new correlations within it as beautiful, fascinating, and a land of limitless possibility.

As you can see in this video from Palantir, software is quickly moving up the ladder from pattern recognition to meaning extrapolation. Thanks to Moore's law, AI is approaching exponentially, not linearly.

It's no stretch to say every business on the planet will be affected by the transition to machine-intelligence-driven decisions, and it's happening faster than you think. We're at "internet in 1999" levels of close. Buy Intel stock, because we're all about to start consuming 10x or 100x more CPU time distilling this data.

From a programming perspective, true intelligence is a solvable problem: an input fed into a hierarchical decision tree – itself a recursively self-optimized series of trees with nonlinear signal weights, trained on previous datasets and multiplied by biases or values.

So by adding a testing mechanism that compares input and output data, any rule can be quickly and recursively self-optimized, then further refined with nonlinear value weighting.

Let's walk through "what should I eat for lunch?" – but the input question can be anything: what stock to buy / sell / short / option, who to hire, etc.

Step 1: Pick a few signals and ballpark a weighted importance*

*In the real world, importance is usually nonlinear, typically following a few simple mathematical curves (binary, exponential, logarithmic, etc.)

In the lunchtime decision, input factors include:

  • Procurement difficulty (time, effort…)
  • Health
  • Taste
  • Values (Vegan, etc.)
  • Cost
  • Social factors (eating alone? with a date to impress?)
  • Emotional status (stressed, depressed, or motivated and inspired to finally get that six pack?)

These priorities vary in shape as well as size. If you have $50 in the bank, cost is a factor, but its importance is logarithmic – having $50 million in the bank won't likely change what you eat day to day vs. $5 million. Values like veganism are binary inputs – some foods are simply out.
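Step 1 can be sketched in a few lines of code. This is a hypothetical illustration – the option names, scores, and weights are made up – but it shows linear weights, a logarithmic cost weight, and veganism as a binary exclusion:

```python
import math

# Hypothetical lunch signals, each scored 0-10 per option (illustrative values).
options = {
    "salad":       {"ease": 4, "health": 9, "taste": 5, "vegan": True,  "cost": 8},
    "burger":      {"ease": 7, "health": 3, "taste": 9, "vegan": False, "cost": 12},
    "lentil soup": {"ease": 6, "health": 8, "taste": 6, "vegan": True,  "cost": 6},
}

BANK_BALANCE = 50.0  # cost matters a lot at $50, barely at $5M: logarithmic weight
cost_weight = 1.0 / math.log10(max(BANK_BALANCE, 10))  # shrinks as wealth grows
IS_VEGAN = True  # binary value: non-vegan options are simply out

def score(signals):
    if IS_VEGAN and not signals["vegan"]:
        return float("-inf")            # binary input: hard exclusion
    return (1.0 * signals["ease"]       # linear weights, ballparked by hand
            + 2.0 * signals["health"]
            + 1.5 * signals["taste"]
            - cost_weight * signals["cost"])  # logarithmic cost sensitivity

best = max(options, key=lambda name: score(options[name]))
print(best)
```

With these made-up numbers the burger is excluded outright and the soup edges out the salad; change the weights and the answer changes, which is exactly why Step 2's feedback loop matters.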

Step 2: Create an Input-Output Feedback Loop

Here's an example: in SwiftTasks, we monitor time-to-completion to estimate a given worker's turnaround time by task category, weighted by previous accuracy scoring. In SwiftCRM, the overly optimistic sales rep who continually overestimates his closings by 83% will have his numbers discounted accordingly in the report to the manager / CEO.
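A minimal sketch of that discounting, with invented forecast/actual history (these are not real SwiftCRM numbers): compute the rep's historical accuracy ratio and scale the next raw forecast by it.

```python
# Hypothetical (forecast, actual) pairs for past quarters -- illustrative only.
history = [
    (100_000, 55_000),
    (120_000, 65_000),
    (90_000, 50_000),
]

# Mean of actual/forecast: how much of this rep's forecast historically lands.
accuracy = sum(actual / forecast for forecast, actual in history) / len(history)

def adjusted_forecast(raw_forecast):
    """What the manager's report shows instead of the rep's raw number."""
    return raw_forecast * accuracy

print(round(adjusted_forecast(150_000)))
```

The same one-liner works for turnaround-time estimates: replace dollars with hours and the loop is identical.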

This loop self-corrects as the sales rep learns from the scoring – unless he or she is delusional. The goal of the loop is learning: given more data, the simple averaging rule can be refined with patterns. Retail foot traffic follows seasonal patterns; aligned with weather data from another dataset, it can predict store inventory needs or staffing requirements for the next 14 days. True AI for business then chains that prediction to actual scheduling and inventory ordering. The inputs become increasingly automated, more accurate, and capable of more.

The loop is then closed when tied to employee time clocks and actual sales data to then correct the algorithm.

True AI for business will then consider additional inputs on its own – road construction, macroeconomic data, marketing spend – to get increasingly accurate.
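The seasonal-traffic-plus-weather chain above can be sketched as a toy function. All the baselines, factors, and the customers-per-clerk ratio here are invented for illustration:

```python
import math

# Hypothetical learned averages -- illustrative, not real retail data.
SEASONAL_BASELINE = {"Mon": 120, "Sat": 300}   # average visitors by weekday
WEATHER_FACTOR = {"sunny": 1.1, "rain": 0.7}   # correction from a second dataset

CUSTOMERS_PER_CLERK = 40  # one clerk can serve roughly 40 visitors per day

def staff_needed(weekday, weather):
    """Chain the traffic prediction straight into a staffing decision."""
    predicted_visitors = SEASONAL_BASELINE[weekday] * WEATHER_FACTOR[weather]
    return math.ceil(predicted_visitors / CUSTOMERS_PER_CLERK)

print(staff_needed("Sat", "rain"))  # 300 visitors * 0.7 rain factor -> 210 -> 6 clerks
```

Closing the loop then means feeding the actual time-clock and sales data back in to correct `SEASONAL_BASELINE` and `WEATHER_FACTOR`.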

This is where more data is crucial, and independent small businesses and sole practitioners will be at a real disadvantage.

Capitalism is about to get even more polarized.

Step 3: Start Connecting Inputs (Signals) to Outputs (Automations)

Here's where things get interesting – implementation. At SwiftCloud, we're heavily into helping lead-gen businesses that sell offline (real estate, finance, etc.), so the desired output is quality leads. While ambitious business owners usually want to "floor it," a healthier choice is to treat operational availability as an input, affected by lead time, which programmatically adjusts your media buys. Totally maxed out? Bring down your spend until you hire help – but the moment you do, kick it up again. This is management 101; what's new is how simple it becomes once the input (% of operational availability) is tied to the output (ad spend, which drives incoming leads and sales).
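Tying that input to that output can be a single function. A minimal sketch, assuming a made-up daily budget and a cutoff threshold (neither is a real SwiftCloud parameter):

```python
# Hypothetical budget and cutoff -- illustrative values only.
MAX_DAILY_SPEND = 500.0   # dollars per day when fully available
PAUSE_THRESHOLD = 0.1     # below 10% free capacity, pause entirely

def daily_ad_spend(availability):
    """availability: 0.0 (totally maxed out) .. 1.0 (idle capacity)."""
    if availability < PAUSE_THRESHOLD:
        return 0.0                          # maxed out: stop buying leads
    return MAX_DAILY_SPEND * availability   # ramp spend back up as capacity frees

print(daily_ad_spend(0.05), daily_ad_spend(0.5))
```

Plug `daily_ad_spend` into whatever programmatic buying API you use, and hiring help automatically turns the lead faucet back on.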

In staffing, things like someone updating their LinkedIn profile may be an indicator the person is thinking of leaving. Alone, it’s just one signal – but combined with others, say, anonymous candidates with similar skills, similar cities showing up online may correlate into an employee ready to move. If that’s your employee, trigger a retention script – conversations, bonus, personal attention, promotion, etc. – if you’re looking to hire, pounce quickly.
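Combining weak signals like these into one trigger is just a weighted sum against a threshold. The signal names, weights, and threshold below are all invented for illustration:

```python
# Hypothetical attrition signals and hand-ballparked weights.
SIGNAL_WEIGHTS = {
    "updated_linkedin_profile": 0.3,
    "similar_anon_candidates_nearby": 0.4,
    "declined_recent_project": 0.2,
}

RETENTION_THRESHOLD = 0.5  # above this combined score, act

def attrition_risk(observed_signals):
    """Sum the weights of whichever signals fired for this employee."""
    return sum(SIGNAL_WEIGHTS[s] for s in observed_signals)

observed = {"updated_linkedin_profile", "similar_anon_candidates_nearby"}
if attrition_risk(observed) > RETENTION_THRESHOLD:
    print("trigger retention script")  # conversation, bonus, promotion, etc.
```

Either signal alone stays under the threshold; together they cross it – which is the whole point of combining weak signals.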

Step 4: Optimize toward Self-Optimization.

All this fancy-sounding automation may start as a crude Google Doc or a hacked-together PHP app – but any business can start moving in this direction, and profiting from it. For a while – maybe even a few years – you may be the connector, but you'll have clear data on one side (the inputs) and easier controls on the other (the outputs).

Crafty coders (myself included) can then connect beta software that starts in "Simulation Mode": the software calculates what it thinks should be done, and you can review, accept, decline, or simply be advised by it.

Important in any design is that the accuracy score of each output can recursively affect the input weighting, so that accuracy itself becomes another input into the meta-algorithm that shapes the primary algorithm.
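That meta-loop can be sketched as a crude multiplicative weight update – each signal's weight shrinks or grows based on how much it contributed to the last prediction's error. The signal names, learning rate, and error attribution below are all hypothetical:

```python
# Hypothetical signal weights for the staffing predictor (illustrative).
weights = {"seasonality": 1.0, "weather": 1.0, "marketing_spend": 1.0}
LEARNING_RATE = 0.1

def update_weights(error_contributions):
    """error_contributions: per-signal share of the miss.
    Positive = the signal was overweighted; negative = underweighted."""
    for signal, err in error_contributions.items():
        weights[signal] *= (1.0 - LEARNING_RATE * err)  # shrink bad, grow good

# After one bad prediction: seasonality overshot, weather undershot.
update_weights({"seasonality": 0.2, "weather": -0.5, "marketing_spend": 0.0})
print(weights)
```

Run this after every prediction cycle and the weighting drifts toward whatever the accuracy scores reward – accuracy feeding back into the algorithm that produces it.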

We do this in the real world via simple checks and balances, which show up as disapproval, loss of money, discipline from parents, physical pain, etc. – corrective inputs that modify the algorithm.

This is oversimplified, of course, and without safeguards like value dampening and vector isolation the code will "fly off the rails," yielding useless data – but if you're still reading, you get the idea.

As the optimization loop runs, any well-designed system will become increasingly accurate, provided the signals are correct and the weighting starts at least in the ballpark.

Infants throw food on the floor (input), triggering an exciting and interesting reaction from parents (output) – plus gravity and mess – leading to conclusions about their world, each building on the last. Auditory symbols are further abstracted into written symbols, leading to a hierarchy of ideas which, combined with values and biases, equals a human – whom we experience via input (approval felt through a smile) based on output (a smile). While some believe in an ephemeral soul, it may well be that we're simply our "code plus a hard drive of experience," and a soul could be simply a concept created by a self-aware intelligence uncomfortable with mortality. Self-awareness and mortality are heady concepts to imagine a robot comprehending – but that's because humans are wired to predict in a linear fashion.

We might be in a computer already. Why not let our own computers do some of our work?