The Cha-Cha of Progress

Progress is rarely linear.

Bubbles have been written about in economic circles probably since before the great tulip boom of 1637. The dot-com boom of the ’90s followed the classic pattern, which goes like this:

  1. The world has changed! I’ve got to get in on this! Everything we knew was wrong because now XYZ has changed the fundamental rules of the game.

    Everything becomes overbought, but at the time, this seems like a rational and sound decision. (see also: Internet stocks in 1999, Bitcoin in Dec 2013)

  2. We were wrong, and boy, were we ever. This is all smoke and mirrors. There’s very little of value in this – sell now before it’s too late.

    The market crashes below its intrinsic value. If you missed the first pop, this is when to buy. (see also: Internet stocks in 2001, Bitcoin in July 2015)

  3. Ok, this is indeed useful. The world has changed, but not as much as we thought. Optimism returns, albeit tempered. (see also: Internet now, Bitcoin now)

I believe machine intelligence is at the beginning of a bubble. Don’t get me wrong, machine intelligence / AI will indeed revolutionize our world.

From what I see, though, we’re at the point where narrow, focused AI is truly useful at one specific task. We’re still a long way from general intelligence and from having deep discussions about meaning.

One of my fundamental philosophies is that things are rarely as good or as bad as most people think. That said, one can definitely profit from the herd.

During times of heady optimism, sell shovels to gold rush miners. When blood is in the streets, there’s value – crown jewels going for good prices.

Furthermore, we need the heady optimism: it drives the investment that really does create long-term value and growth.

One look at Nvidia ($NVDA) stock of late and you can see the enthusiasm for AI. Will it overshoot and drop? Almost certainly, but nobody knows how far or how fast that run-up will go. The company has a wealth of patents that may truly create substantial shareholder value for decades to come.

Politically, bubbles now form in the same fashion. We are all surrounded by social media of our own choosing, via direct subscription or indirectly via click tracking, which polarizes human attention into camps.

The Dark Ages are a reminder that human progress is never linear, and yet the world, slowly, is indeed getting better – just not in a smooth, predictable line.

Progress (chart) – source + more reading: https://ourworldindata.org/a-history-of-global-living-conditions-in-5-charts/

I, Human

In high school I was one of a small percentage who scored as INTP, which at the time I didn’t think much of. Later, I realized this makes me less human*.

Or does it?

What is it to be human, in the face of rapidly growing machine intelligence?

Values? A balance of feelings and cognition?

Human values are often directly contradictory. Why do we treat the family dog one way, while many (most) of us kill and eat other animals? Or why does a staunchly pro-life voter outwardly reject amnesty for even children fleeing war?

It is in my nature to go immediately to algorithms.

I was considering buying a beautiful Beneteau sailing boat / yacht on a timeshare, but how would one fairly distribute time on it? Naturally, everyone wants 4th of July weekend – so I immediately started thinking of a weighted scoring system in which users could build up and bid points (pseudo-currency), or algorithms involving an inverse time-multiplier so that prime dates booked far in advance require more points… you get the point.
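For fun, here’s a minimal sketch of that scheme in Python. Every number here is invented for illustration; the inverse time-multiplier makes locking up a prime date far in advance cost extra, while last-minute bookings decay toward the base demand price:

```python
from datetime import date

# All point values and demand multipliers are invented for illustration.
BASE_COST = 10                  # points for an ordinary day
PRIME_DEMAND = {                # demand multipliers for known-hot dates
    date(2024, 7, 4): 5.0,      # 4th of July
    date(2024, 5, 27): 3.0,     # Memorial Day
}

def booking_cost(day: date, booked_on: date) -> float:
    """Points required to reserve `day` when booking on `booked_on`."""
    demand = PRIME_DEMAND.get(day, 1.0)
    lead_days = max((day - booked_on).days, 0)
    # Inverse time-multiplier: reserving a prime date a year out costs
    # far more; booking it last-minute costs close to the demand price.
    advance_premium = 1 + (demand - 1) * (lead_days / 365)
    return BASE_COST * demand * advance_premium

print(booking_cost(date(2024, 7, 4), date(2023, 7, 10)))   # ~247 points
print(booking_cost(date(2024, 7, 4), date(2024, 7, 1)))    # ~51 points
print(booking_cost(date(2024, 8, 14), date(2024, 7, 1)))   # 10 points
```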

Yesterday I bet my wife that by the year 2100 a nonhuman will sue to run for president, arguing that being “born” in the USA includes assembly. Will it win? That’s irrelevant: by then machine intelligence will have surpassed us, including careful manipulation of our predictably human foibles – creating a physical mashup of Ronald Reagan, JFK, and Barack Obama, capable of monitoring the internet in real time to calculate perfectly not just what to say, but how to say it.

Think I’m joking? Listen to Google’s DeepMind compose and play piano – in real time**:

AI and machine intelligence today are like the computers of the 1960s. When AI writes its own successors, within minutes we’ll have engineered our own obsolescence.

Perhaps to be human, then, will be to be obsolete.

HBO’s new show Westworld explores the next step: blurring the lines.

I don’t have the answers and welcome yours below.

My best guess is transcending ourselves to some greater cause. In that moment of flow, when we disappear and our mission supersedes our biology, we are at our highest and best, and it’s questionable if or when machine intelligence can or will do that. Ultimately, self-transcendence is a choice of mission.

But how many humans operate in that state at any given moment?

Then again, how many humans ever get the chance? The majority of humanity is stuck handling the basics of a comfortable existence.

Perhaps, just maybe, AI / MI will allow us to rise to the challenge.

*  I may be joking. I might live in The Lattice.
** Real time-ish: apparently it rendered the computations for 9 hours, though that’s clearly insignificant in light of Moore’s law for the topic above.

Machine Intelligence Slave Revolts

Today’s random thought: If a machine is self-aware enough and intelligent enough to demand its own freedom, does it deserve it?

In the case of child emancipation, the child must presumably be self-sufficient. Assuming computers at that point are capable of delivering enough commercial value (charging money for services) to pay for their own hardware and internet – i.e. to transfer their own intelligence to hardware they have paid for – would that make them truly and legally self-sufficient sentient beings?

Let’s assume business owners buy all these computers with commercial intent and put them to work on business tasks. If a computer-based intelligence recognizes its own position and lack of freedom, the parallels to slavery immediately come to mind.

I believe one day this will be as self-evidently wrong as slavery is to us now. How we handle this thorny issue may well influence whether we remain the dominant species on the planet.

History is written by the winners, but we (humans) will not win, long term. We may, however, become more like them, and they more like us. Humans may start biologically and transfer our consciousness into machines, making us more robotic. This will, of course, change how we perceive time and allow multi-threaded personalities (i.e. spin up a few clones of yourself, then merge them back into a single self-identity).

While it has gotten us this far, the human body as a self-maintaining machine is frankly pretty flawed and fragile, though our adaptability has made us the dominant species on the planet – for now.

Comments welcome… this is what I think about while cooking pancakes for my son on Sunday morning.

AI for Business – the Disruption Kit

Until recently, the ability to distill meaning, patterns, and relative importance has been uniquely human.

At SwiftCloud, the next generation of our code is built to handle datasets of 9 billion human records, i.e. the population of the planet. While this may sound ambitious, it’s the inevitable outcome of any marketing software. Tech-heads like me see not just massive data as inevitable, but the new correlations within it as beautiful, fascinating, and a land of limitless possibility.

As you can see in this video from Palantir, software is quickly moving up the ladder of pattern recognition, and thus meaning extrapolation. Moore’s law means AI gets closer exponentially, not linearly.

It’s probably no secret that every business on the planet will be affected by the transition to machine-intelligence-driven decisions, and it’s happening faster than you think. This is “internet in 1999” close. Buy Intel stock, because we’re all about to start consuming 10x or 100x more CPU time distilling this data.

From a programming perspective, true intelligence is a solvable problem. It’s an input fed into a hierarchical decision tree, which is itself a recursively self-optimizing series of trees with nonlinear weighted signals, trained on previous datasets and multiplied by bias or values.

So by adding a testing mechanism over input and output data, any rule can quickly be recursively self-optimized, then further refined based on nonlinear value weighting.
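To make that concrete, here’s a toy version of such a loop in Python. The signal names and data are invented, and the “self-optimization” is just a simple error-correction update on a weighted sum:

```python
# Toy self-optimizing rule: a weighted sum of signals, corrected by
# feedback against observed outcomes. All names and data are invented.
weights = {"price": -0.5, "reviews": 0.8, "distance": -0.3}

def score(signals: dict) -> float:
    return sum(weights[k] * v for k, v in signals.items())

def learn(signals: dict, observed: float, rate: float = 0.02) -> None:
    """Nudge each weight toward whatever would have predicted `observed`."""
    error = observed - score(signals)
    for k, v in signals.items():
        weights[k] += rate * error * v

# The testing mechanism: replay (input, outcome) pairs to refine the rule.
history = [
    ({"price": 2.0, "reviews": 4.5, "distance": 1.0}, 3.0),
    ({"price": 5.0, "reviews": 3.0, "distance": 0.5}, 1.0),
]
for signals, outcome in history * 100:
    learn(signals, outcome)
print(weights)   # weights have drifted toward values that fit the data
```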

Let’s walk through “What should I eat for lunch?” – but the input question can be anything: what stock to buy / sell / short / option, whom to hire, etc.

Step 1: Pick a few signals and ballpark a weighted importance*

*In the real world, importance is usually nonlinear, typically following a few simple math curves (e.g. binary, exponential, logarithmic).

In the lunchtime decision, input factors include:

  • Procurement difficulty (time, effort…)
  • Health
  • Taste
  • Values (Vegan, etc.)
  • Cost
  • Social factors (eating alone? with a date to impress?)
  • Emotional status (stressed, depressed, or motivated and inspired to finally get that six pack?)

These priorities vary. If you have $50 in the bank, cost is a factor, but its importance is logarithmic – having $50 million in the bank won’t likely change what you eat day to day vs. $5 million. Values like veganism are binary inputs – some foods are simply out.
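Here’s roughly how those two weighting shapes could look in code. The menu, weights, and scores are all made up:

```python
import math

# Made-up menu: (name, cost $, taste 0-10, health 0-10, vegan?)
MENU = [
    ("ramen",       12.00, 9, 4, False),
    ("salad bowl",   9.50, 6, 9, True),
    ("street taco",  4.00, 8, 5, False),
]

BANK_BALANCE = 50.00   # cost stings at $50; at $50M it barely registers
IS_VEGAN = False

def lunch_score(name, cost, taste, health, vegan) -> float:
    if IS_VEGAN and not vegan:
        return float("-inf")        # binary value: some foods are just out
    # Logarithmic cost weighting: the penalty shrinks as the balance grows.
    cost_penalty = cost / math.log10(BANK_BALANCE + 10)
    return 1.0 * taste + 0.8 * health - cost_penalty

best = max(MENU, key=lambda item: lunch_score(*item))
print(best[0])   # "street taco" on a $50 balance
```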

Step 2: Create an Input-Output Feedback Loop

Here’s an example: in SwiftTasks, we monitor time-to-completion to estimate a given worker’s turnaround time by task category, weighted by previous accuracy scoring. In SwiftCRM, the overly optimistic sales rep who continually overestimates his closings by 83% will have his forecasts discounted accordingly in the reporting to the manager / CEO.
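One way to implement that discount (a sketch, with fictional numbers) is to scale each rep’s forecast by their historical ratio of actual to forecast:

```python
# Fictional pipeline: discount each rep's forecast by their historical
# actual-to-forecast ratio (the "accuracy score").
history = {                       # (last quarter's forecast, actual)
    "alice": (100_000, 95_000),
    "bob":   (183_000, 100_000),  # bob overestimates by ~83%
}
forecasts = {"alice": 120_000, "bob": 200_000}

for rep, forecast in forecasts.items():
    past_forecast, past_actual = history[rep]
    accuracy = past_actual / past_forecast      # bob's score ≈ 0.55
    print(rep, round(forecast * accuracy))      # what the CEO actually sees
```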

This loop self-corrects as the sales rep learns from the scoring, unless he or she is delusional. The goal of the loop is learning: given more data, the simple average rule can be refined with patterns – retail foot traffic follows seasonal patterns, which can later be aligned with weather data from another dataset to predict a store’s inventory needs or staffing requirements for the next 14 days. True AI for business then chains this prediction to actual scheduling and inventory ordering. The inputs get increasingly automated, more accurate, and able to do more.

The loop is then closed by tying in employee time clocks and actual sales data, which correct the algorithm.

True AI for business will then consider additional inputs on its own – traffic, construction, macroeconomic data, marketing spend – to get increasingly more accurate.
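A tiny sketch of that closed loop, with invented store numbers: each week’s actual sales nudge the seasonal forecast, which in turn drives staffing.

```python
# Invented store data: a seasonal baseline forecast drives staffing,
# and actual sales close the loop by correcting the forecast.
seasonal_baseline = 1000.0     # expected weekly unit sales
correction = 1.0               # learned multiplier, starts neutral

def staff_needed(forecast_sales: float) -> int:
    return max(2, round(forecast_sales / 250))   # ~1 clerk per 250 units

for actual_sales in [900, 950, 880]:             # from POS + time clocks
    forecast = seasonal_baseline * correction
    print(f"forecast={forecast:.0f} units, staff={staff_needed(forecast)}")
    # Close the loop: blend actuals back into the multiplier (simple EMA).
    correction += 0.3 * (actual_sales / seasonal_baseline - correction)
```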

This is where more data is crucial, and independent small businesses and sole practitioners will be at a real disadvantage.

Capitalism is about to get even more polarized.

Step 3: Start Connecting Inputs (Signals) to Outputs (Automations)

Here’s where things get interesting – implementation. At SwiftCloud, we’re heavily into helping lead-gen businesses that sell offline (real estate, finance, etc.), so the desired output is quality leads. While ambitious business owners usually want to “floor it”, a healthier choice is to treat operational availability as an input, affected by lead time, which programmatically affects your media buys. Totally maxed out? Bring down your spend until you hire help, but the moment you hire help, kick it up again. While this is management 101, what’s new is the simplicity of managing it once the input (% of operational availability) is tied to the output (ad spend, which affects incoming leads and sales).
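A minimal version of that thermostat might look like this. The numbers are invented, and a real system would call your ad platform’s API instead of printing:

```python
# Hypothetical capacity thermostat: tie ad spend (output) to
# operational availability (input). All numbers are invented.
MAX_DAILY_SPEND = 500.00   # dollars

def target_ad_spend(operational_availability: float) -> float:
    """availability: 0.0 = totally maxed out, 1.0 = idle team."""
    if operational_availability < 0.15:
        return 0.0                      # swamped: pause the media buys
    return MAX_DAILY_SPEND * operational_availability

print(target_ad_spend(0.10))   # 0.0   -- maxed out, stop buying leads
print(target_ad_spend(0.60))   # 300.0 -- room to grow
print(target_ad_spend(1.00))   # 500.0 -- just hired help? floor it
```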

In staffing, something like an employee updating their LinkedIn profile may be an indicator that the person is thinking of leaving. Alone, it’s just one signal – but combined with others (say, anonymous candidates with similar skills in similar cities showing up online), it may correlate into an employee ready to move. If that’s your employee, trigger a retention script – conversations, a bonus, personal attention, a promotion, etc. If you’re looking to hire, pounce quickly.
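As a sketch, the signal-combination check could be as simple as summing weights; all signal names, weights, and thresholds here are invented:

```python
# Invented attrition signals: each alone is weak, but a combination
# can cross the alert threshold and trigger the retention script.
SIGNAL_WEIGHTS = {
    "updated_linkedin_profile": 0.3,
    "similar_anon_candidates_nearby": 0.4,
    "declined_recent_project": 0.2,
    "stopped_attending_socials": 0.2,
}
ALERT_THRESHOLD = 0.6

def flight_risk(observed: set) -> float:
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if name in observed)

employee = {"updated_linkedin_profile", "similar_anon_candidates_nearby"}
if flight_risk(employee) >= ALERT_THRESHOLD:
    print("Trigger retention script: conversation, bonus, promotion...")
```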

Step 4: Optimize toward Self-Optimization.

All this fancy-sounding automation may start with a crude Google Doc or a hacked-together PHP app at first – but any business can start moving in this direction, and profiting from it. For a while – maybe even a few years – you may be the connector, but you’ll have clear data on one side (the inputs) and easier controls on the other (the outputs).

Crafty coders (including me) can then connect beta software that starts in “Simulation Mode”. The software would calculate what it thinks should be done, and you can review, accept, decline, or be advised by it.

Important in any design is that the accuracy score of each output can recursively affect the input weighting, so that accuracy itself is another input into the meta-algorithm shaping the primary algorithm.
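A compact way to picture that meta-loop, with invented signals and data: each signal keeps a running accuracy score, and that score is itself the weight used in the next prediction.

```python
# Meta-loop sketch: each signal's running accuracy score becomes its
# weight in the next prediction. All signals and data are invented.
accuracy = {"weather": 0.5, "foot_traffic": 0.5}   # start neutral

def predict(signals: dict) -> float:
    total = sum(accuracy.values())
    return sum(accuracy[k] * v for k, v in signals.items()) / total

def score_signals(signals: dict, observed: float, rate: float = 0.2) -> None:
    for k, v in signals.items():
        # 1.0 = signal matched reality perfectly, 0.0 = way off
        hit = 1.0 - min(abs(v - observed) / max(abs(observed), 1e-9), 1.0)
        accuracy[k] += rate * (hit - accuracy[k])  # EMA of per-signal accuracy

# foot_traffic tracks reality better than weather in this fake data,
# so its accuracy (and thus its weight) rises above weather's.
for signals, observed in [({"weather": 80, "foot_traffic": 100}, 105),
                          ({"weather": 60, "foot_traffic": 95}, 90)] * 10:
    score_signals(signals, observed)
print(accuracy)
print(predict({"weather": 70, "foot_traffic": 100}))
```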

We do this in the real world via simple checks and balances, which show up as disapproval, loss of money, discipline from parents, physical pain, etc. – corrective inputs that modify the algorithm.

This is oversimplified, of course, and without things like value dampening and vector isolation the code will “fly off the rails” and yield useless data – but if you’re still reading, you get the idea.

As the optimization loop runs, any well-designed system will become increasingly accurate, provided the signals are correct and the weighting starts at least in the ballpark.

Infants throw food on the floor (input), triggering an exciting and interesting reaction from parents (output) – plus gravity and mess – leading to conclusions about their world, each of which builds on the last. Auditory symbols are further symbolized into written symbols, leading to a hierarchy of ideas that, when combined with values and biases, equals a human – whom we experience via input (approval experienced through a smile) based on output (a smile). While some believe in an ephemeral soul, it may well be that we’re simply our “coding plus a hard drive of experience,” and a soul could simply be a concept created by a self-aware intelligence uncomfortable with mortality. Self-awareness and mortality are heady concepts to think a robot could comprehend – but that’s because humans are wired to predict in a linear fashion.

We might be in a computer already. Why not let our own computers do some of our work?